00:00:00.001 Started by upstream project "autotest-nightly" build number 4261 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3624 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.090 The recommended git tool is: git 00:00:00.090 using credential 00000000-0000-0000-0000-000000000002 00:00:00.092 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.129 Fetching changes from the remote Git repository 00:00:00.131 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.186 Using shallow fetch with depth 1 00:00:00.186 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.186 > git --version # timeout=10 00:00:00.231 > git --version # 'git version 2.39.2' 00:00:00.231 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.270 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.270 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.428 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.439 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.451 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.451 > git config core.sparsecheckout # timeout=10 00:00:06.461 > git read-tree -mu HEAD # timeout=10 00:00:06.476 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.492 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.493 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.573 [Pipeline] Start of Pipeline 00:00:06.586 [Pipeline] library 00:00:06.588 Loading library shm_lib@master 00:00:06.588 Library shm_lib@master is cached. Copying from home. 00:00:06.602 [Pipeline] node 00:00:06.625 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.627 [Pipeline] { 00:00:06.636 [Pipeline] catchError 00:00:06.637 [Pipeline] { 00:00:06.647 [Pipeline] wrap 00:00:06.653 [Pipeline] { 00:00:06.658 [Pipeline] stage 00:00:06.659 [Pipeline] { (Prologue) 00:00:06.889 [Pipeline] sh 00:00:07.171 + logger -p user.info -t JENKINS-CI 00:00:07.195 [Pipeline] echo 00:00:07.196 Node: GP11 00:00:07.205 [Pipeline] sh 00:00:07.509 [Pipeline] setCustomBuildProperty 00:00:07.520 [Pipeline] echo 00:00:07.521 Cleanup processes 00:00:07.525 [Pipeline] sh 00:00:07.807 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.807 3241716 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.821 [Pipeline] sh 00:00:08.109 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.109 ++ awk '{print $1}' 00:00:08.109 ++ grep -v 'sudo pgrep' 00:00:08.109 + sudo kill -9 00:00:08.109 + true 00:00:08.125 [Pipeline] cleanWs 00:00:08.138 [WS-CLEANUP] Deleting project workspace... 00:00:08.138 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.145 [WS-CLEANUP] done 00:00:08.149 [Pipeline] setCustomBuildProperty 00:00:08.162 [Pipeline] sh 00:00:08.448 + sudo git config --global --replace-all safe.directory '*' 00:00:08.544 [Pipeline] httpRequest 00:00:09.814 [Pipeline] echo 00:00:09.816 Sorcerer 10.211.164.101 is alive 00:00:09.826 [Pipeline] retry 00:00:09.828 [Pipeline] { 00:00:09.843 [Pipeline] httpRequest 00:00:09.851 HttpMethod: GET 00:00:09.852 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.852 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.872 Response Code: HTTP/1.1 200 OK 00:00:09.873 Success: Status code 200 is in the accepted range: 200,404 00:00:09.873 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:36.273 [Pipeline] } 00:00:36.290 [Pipeline] // retry 00:00:36.297 [Pipeline] sh 00:00:36.582 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:36.599 [Pipeline] httpRequest 00:00:37.041 [Pipeline] echo 00:00:37.046 Sorcerer 10.211.164.101 is alive 00:00:37.085 [Pipeline] retry 00:00:37.086 [Pipeline] { 00:00:37.093 [Pipeline] httpRequest 00:00:37.097 HttpMethod: GET 00:00:37.097 URL: http://10.211.164.101/packages/spdk_06bc8ce530f0c3f5d5947668cc624adba5375403.tar.gz 00:00:37.098 Sending request to url: http://10.211.164.101/packages/spdk_06bc8ce530f0c3f5d5947668cc624adba5375403.tar.gz 00:00:37.108 Response Code: HTTP/1.1 200 OK 00:00:37.108 Success: Status code 200 is in the accepted range: 200,404 00:00:37.109 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_06bc8ce530f0c3f5d5947668cc624adba5375403.tar.gz 00:01:31.041 [Pipeline] } 00:01:31.059 [Pipeline] // retry 00:01:31.066 [Pipeline] sh 00:01:31.352 + tar --no-same-owner -xf spdk_06bc8ce530f0c3f5d5947668cc624adba5375403.tar.gz 00:01:33.893 [Pipeline] sh 00:01:34.177 + git -C spdk log --oneline -n5 00:01:34.178 06bc8ce53 lib/vhost: use RB_TREE for vhost device management 00:01:34.178 b264e22f0 accel/error: fix callback type for tasks in a sequence 00:01:34.178 0732c1430 accel/error: don't submit tasks intended to fail 00:01:34.178 b53b961c8 accel/error: move interval check to a function 00:01:34.178 c9f92cbfa accel/error: check interval before submission 00:01:34.189 [Pipeline] } 00:01:34.202 [Pipeline] // stage 00:01:34.211 [Pipeline] stage 00:01:34.213 [Pipeline] { (Prepare) 00:01:34.229 [Pipeline] writeFile 00:01:34.244 [Pipeline] sh 00:01:34.551 + logger -p user.info -t JENKINS-CI 00:01:34.564 [Pipeline] sh 00:01:34.848 + logger -p user.info -t JENKINS-CI 00:01:34.860 [Pipeline] sh 00:01:35.144 + cat autorun-spdk.conf 00:01:35.145 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.145 SPDK_TEST_NVMF=1 00:01:35.145 SPDK_TEST_NVME_CLI=1 00:01:35.145 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.145 SPDK_TEST_NVMF_NICS=e810 00:01:35.145 SPDK_RUN_ASAN=1 00:01:35.145 SPDK_RUN_UBSAN=1 00:01:35.145 NET_TYPE=phy 00:01:35.152 RUN_NIGHTLY=1 00:01:35.156 [Pipeline] readFile 00:01:35.178 [Pipeline] withEnv 00:01:35.180 [Pipeline] { 00:01:35.192 [Pipeline] sh 00:01:35.479 + set -ex 00:01:35.479 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:35.479 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:35.479 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.479 ++ SPDK_TEST_NVMF=1 00:01:35.479 ++ SPDK_TEST_NVME_CLI=1 00:01:35.479 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.479 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:35.479 ++ SPDK_RUN_ASAN=1 00:01:35.479 ++ SPDK_RUN_UBSAN=1 00:01:35.479 ++ NET_TYPE=phy 00:01:35.479 ++ RUN_NIGHTLY=1 00:01:35.479 + case $SPDK_TEST_NVMF_NICS in 00:01:35.479 + DRIVERS=ice 00:01:35.479 + [[ tcp == \r\d\m\a ]] 00:01:35.479 + [[ -n ice ]] 00:01:35.479 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:35.479 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:35.479 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:35.479 rmmod: ERROR: Module irdma is not currently loaded 00:01:35.479 rmmod: ERROR: Module i40iw is not currently loaded 00:01:35.479 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:35.479 + true 00:01:35.479 + for D in $DRIVERS 00:01:35.479 + sudo modprobe ice 00:01:35.479 + exit 0 00:01:35.490 [Pipeline] } 00:01:35.505 [Pipeline] // withEnv 00:01:35.510 [Pipeline] } 00:01:35.524 [Pipeline] // stage 00:01:35.533 [Pipeline] catchError 00:01:35.535 [Pipeline] { 00:01:35.548 [Pipeline] timeout 00:01:35.548 Timeout set to expire in 1 hr 0 min 00:01:35.550 [Pipeline] { 00:01:35.563 [Pipeline] stage 00:01:35.565 [Pipeline] { (Tests) 00:01:35.579 [Pipeline] sh 00:01:35.865 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:35.865 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:35.865 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:35.865 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:35.865 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:35.865 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:35.865 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:35.865 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:35.865 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:35.865 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:35.865 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:35.865 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:35.865 + source /etc/os-release 00:01:35.865 ++ NAME='Fedora Linux' 00:01:35.865 ++ VERSION='39 (Cloud Edition)' 00:01:35.865 ++ ID=fedora 00:01:35.865 ++ VERSION_ID=39 00:01:35.865 ++ VERSION_CODENAME= 00:01:35.865 ++ PLATFORM_ID=platform:f39 00:01:35.865 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:35.865 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:35.865 ++ LOGO=fedora-logo-icon 00:01:35.865 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:35.865 ++ HOME_URL=https://fedoraproject.org/ 00:01:35.865 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:35.865 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:35.865 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:35.865 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:35.865 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:35.865 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:35.865 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:35.865 ++ SUPPORT_END=2024-11-12 00:01:35.865 ++ VARIANT='Cloud Edition' 00:01:35.865 ++ VARIANT_ID=cloud 00:01:35.865 + uname -a 00:01:35.865 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:35.865 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:36.805 Hugepages 00:01:36.805 node hugesize free / total 00:01:36.805 node0 1048576kB 0 / 0 00:01:36.805 node0 2048kB 0 / 0 00:01:36.805 node1 1048576kB 0 / 0 00:01:36.805 node1 2048kB 0 / 0 00:01:36.805 
00:01:36.805 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:36.805 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:36.805 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:36.805 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:36.805 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:36.805 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:36.805 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:36.805 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:36.805 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:36.805 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:36.805 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:36.805 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:36.805 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:36.805 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:36.805 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:36.805 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:36.805 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:36.805 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:36.805 + rm -f /tmp/spdk-ld-path 00:01:36.805 + source autorun-spdk.conf 00:01:36.805 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.805 ++ SPDK_TEST_NVMF=1 00:01:36.805 ++ SPDK_TEST_NVME_CLI=1 00:01:36.805 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.805 ++ SPDK_TEST_NVMF_NICS=e810 00:01:36.805 ++ SPDK_RUN_ASAN=1 00:01:36.805 ++ SPDK_RUN_UBSAN=1 00:01:36.805 ++ NET_TYPE=phy 00:01:36.805 ++ RUN_NIGHTLY=1 00:01:36.805 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:36.805 + [[ -n '' ]] 00:01:36.805 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:36.805 + for M in /var/spdk/build-*-manifest.txt 00:01:36.805 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:36.805 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:36.805 + for M in /var/spdk/build-*-manifest.txt 00:01:36.805 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:36.805 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:36.805 + for M in /var/spdk/build-*-manifest.txt 00:01:36.805 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:36.805 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:36.805 ++ uname 00:01:36.805 + [[ Linux == \L\i\n\u\x ]] 00:01:36.805 + sudo dmesg -T 00:01:37.064 + sudo dmesg --clear 00:01:37.064 + dmesg_pid=3242410 00:01:37.064 + [[ Fedora Linux == FreeBSD ]] 00:01:37.064 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:37.064 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:37.064 + sudo dmesg -Tw 00:01:37.064 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:37.064 + [[ -x /usr/src/fio-static/fio ]] 00:01:37.064 + export FIO_BIN=/usr/src/fio-static/fio 00:01:37.064 + FIO_BIN=/usr/src/fio-static/fio 00:01:37.064 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:37.064 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:37.064 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:37.064 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:37.064 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:37.064 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:37.064 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:37.064 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:37.064 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:37.064 23:35:03 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:37.064 23:35:03 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:37.064 23:35:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.064 23:35:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:37.064 23:35:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:37.064 23:35:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.064 23:35:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:37.064 23:35:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1 00:01:37.064 23:35:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:37.064 23:35:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:37.064 23:35:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:01:37.064 23:35:03 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:37.064 23:35:03 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:37.064 23:35:03 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:37.064 23:35:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:37.064 23:35:03 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:37.064 23:35:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:37.064 23:35:03 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:37.064 23:35:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:37.064 23:35:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.064 23:35:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.064 23:35:03 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.064 23:35:03 -- paths/export.sh@5 -- $ export PATH 00:01:37.064 23:35:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.064 23:35:03 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:37.064 23:35:03 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:37.064 23:35:03 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731191703.XXXXXX 00:01:37.064 23:35:03 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731191703.FIHe6P 00:01:37.064 23:35:03 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:37.064 23:35:03 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:37.064 23:35:03 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:37.064 23:35:03 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:37.064 23:35:03 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:37.064 23:35:03 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:37.064 23:35:03 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:37.064 23:35:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.064 23:35:03 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:37.064 23:35:03 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:37.064 23:35:03 -- pm/common@17 -- $ local monitor 00:01:37.064 23:35:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.064 23:35:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.064 23:35:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.064 23:35:03 -- pm/common@21 -- $ date +%s 00:01:37.064 23:35:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.064 23:35:03 -- pm/common@21 -- $ date +%s 00:01:37.064 23:35:03 -- pm/common@25 -- $ sleep 1 00:01:37.064 23:35:03 -- pm/common@21 -- $ date +%s 00:01:37.064 23:35:03 -- pm/common@21 -- $ date +%s 00:01:37.064 23:35:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731191703 00:01:37.064 23:35:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731191703 00:01:37.064 23:35:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731191703 00:01:37.064 23:35:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731191703 00:01:37.064 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731191703_collect-cpu-load.pm.log 00:01:37.064 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731191703_collect-vmstat.pm.log 00:01:37.064 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731191703_collect-cpu-temp.pm.log 00:01:37.064 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731191703_collect-bmc-pm.bmc.pm.log 00:01:38.004 23:35:04 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:38.004 23:35:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:38.004 23:35:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:38.004 23:35:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:38.004 23:35:04 -- spdk/autobuild.sh@16 -- $ date -u 00:01:38.004 Sat Nov 9 10:35:04 PM UTC 2024 00:01:38.004 23:35:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:38.004 v25.01-pre-176-g06bc8ce53 00:01:38.004 23:35:04 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:38.004 23:35:04 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:38.004 23:35:04 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:38.004 23:35:04 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:38.004 23:35:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.004 ************************************ 00:01:38.004 START TEST asan 00:01:38.004 ************************************ 00:01:38.004 23:35:04 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:01:38.004 using asan 00:01:38.004 00:01:38.004 real 0m0.000s 00:01:38.004 user 0m0.000s 00:01:38.004 sys 0m0.000s 00:01:38.004 23:35:04 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:38.004 23:35:04 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:38.004 ************************************ 00:01:38.004 END TEST asan 00:01:38.004 ************************************ 00:01:38.004 23:35:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:38.004 23:35:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:38.004 23:35:04 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:38.004 23:35:04 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:38.004 23:35:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.262 ************************************ 00:01:38.262 START TEST ubsan 00:01:38.262 ************************************ 00:01:38.262 23:35:04 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:38.262 using ubsan 00:01:38.262 
00:01:38.262 real 0m0.000s 00:01:38.262 user 0m0.000s 00:01:38.262 sys 0m0.000s 00:01:38.262 23:35:04 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:38.262 23:35:04 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:38.262 ************************************ 00:01:38.262 END TEST ubsan 00:01:38.262 ************************************ 00:01:38.262 23:35:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:38.263 23:35:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:38.263 23:35:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:38.263 23:35:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:38.263 23:35:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:38.263 23:35:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:38.263 23:35:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:38.263 23:35:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:38.263 23:35:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:38.263 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:38.263 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:38.521 Using 'verbs' RDMA provider 00:01:49.071 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:59.064 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:59.064 Creating mk/config.mk...done. 00:01:59.064 Creating mk/cc.flags.mk...done. 00:01:59.064 Type 'make' to build. 00:01:59.064 23:35:24 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:59.064 23:35:24 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:59.064 23:35:24 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:59.064 23:35:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:59.064 ************************************ 00:01:59.064 START TEST make 00:01:59.064 ************************************ 00:01:59.064 23:35:24 make -- common/autotest_common.sh@1127 -- $ make -j48 00:01:59.064 make[1]: Nothing to be done for 'all'. 
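For reference, the build configuration captured above can be approximated outside this CI job with roughly the following steps. This is a minimal sketch: the clone URL and the pkgdep.sh dependency step are assumptions not shown in this log, the --with-fio path and the -j48 job count are specific to this CI node, and the configure flags are copied verbatim from the autobuild.sh invocation above.

  # fetch SPDK with its bundled DPDK submodule (assumed upstream URL, not taken from this log)
  git clone --recurse-submodules https://github.com/spdk/spdk.git && cd spdk
  # install distro build dependencies; this helper ships with SPDK but is not invoked in the log above
  sudo ./scripts/pkgdep.sh
  # same flags the job passed to configure above; adjust --with-fio to a local fio source tree
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
  # the job ran make -j48 to match the node's core count; nproc is a portable stand-in
  make -j"$(nproc)"
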
00:02:09.117 The Meson build system 00:02:09.117 Version: 1.5.0 00:02:09.117 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:09.117 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:09.117 Build type: native build 00:02:09.117 Program cat found: YES (/usr/bin/cat) 00:02:09.117 Project name: DPDK 00:02:09.117 Project version: 24.03.0 00:02:09.117 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:09.117 C linker for the host machine: cc ld.bfd 2.40-14 00:02:09.117 Host machine cpu family: x86_64 00:02:09.117 Host machine cpu: x86_64 00:02:09.117 Message: ## Building in Developer Mode ## 00:02:09.117 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:09.117 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:09.117 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:09.117 Program python3 found: YES (/usr/bin/python3) 00:02:09.117 Program cat found: YES (/usr/bin/cat) 00:02:09.117 Compiler for C supports arguments -march=native: YES 00:02:09.117 Checking for size of "void *" : 8 00:02:09.117 Checking for size of "void *" : 8 (cached) 00:02:09.117 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:09.117 Library m found: YES 00:02:09.117 Library numa found: YES 00:02:09.117 Has header "numaif.h" : YES 00:02:09.117 Library fdt found: NO 00:02:09.117 Library execinfo found: NO 00:02:09.117 Has header "execinfo.h" : YES 00:02:09.117 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:09.117 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:09.117 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:09.117 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:09.117 Run-time dependency openssl found: YES 3.1.1 00:02:09.117 Run-time dependency libpcap found: YES 1.10.4 00:02:09.117 Has header "pcap.h" with dependency libpcap: YES 00:02:09.117 Compiler for C supports arguments -Wcast-qual: YES 00:02:09.117 Compiler for C supports arguments -Wdeprecated: YES 00:02:09.117 Compiler for C supports arguments -Wformat: YES 00:02:09.117 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:09.117 Compiler for C supports arguments -Wformat-security: NO 00:02:09.117 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:09.117 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:09.117 Compiler for C supports arguments -Wnested-externs: YES 00:02:09.117 Compiler for C supports arguments -Wold-style-definition: YES 00:02:09.117 Compiler for C supports arguments -Wpointer-arith: YES 00:02:09.117 Compiler for C supports arguments -Wsign-compare: YES 00:02:09.117 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:09.117 Compiler for C supports arguments -Wundef: YES 00:02:09.117 Compiler for C supports arguments -Wwrite-strings: YES 00:02:09.117 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:09.117 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:09.117 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:09.117 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:09.117 Program objdump found: YES (/usr/bin/objdump) 00:02:09.117 Compiler for C supports arguments -mavx512f: YES 00:02:09.117 Checking if "AVX512 checking" compiles: YES 
00:02:09.117 Fetching value of define "__SSE4_2__" : 1 00:02:09.117 Fetching value of define "__AES__" : 1 00:02:09.117 Fetching value of define "__AVX__" : 1 00:02:09.117 Fetching value of define "__AVX2__" : (undefined) 00:02:09.117 Fetching value of define "__AVX512BW__" : (undefined) 00:02:09.117 Fetching value of define "__AVX512CD__" : (undefined) 00:02:09.117 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:09.117 Fetching value of define "__AVX512F__" : (undefined) 00:02:09.117 Fetching value of define "__AVX512VL__" : (undefined) 00:02:09.117 Fetching value of define "__PCLMUL__" : 1 00:02:09.117 Fetching value of define "__RDRND__" : 1 00:02:09.117 Fetching value of define "__RDSEED__" : (undefined) 00:02:09.117 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:09.117 Fetching value of define "__znver1__" : (undefined) 00:02:09.117 Fetching value of define "__znver2__" : (undefined) 00:02:09.117 Fetching value of define "__znver3__" : (undefined) 00:02:09.117 Fetching value of define "__znver4__" : (undefined) 00:02:09.117 Library asan found: YES 00:02:09.117 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:09.117 Message: lib/log: Defining dependency "log" 00:02:09.117 Message: lib/kvargs: Defining dependency "kvargs" 00:02:09.117 Message: lib/telemetry: Defining dependency "telemetry" 00:02:09.117 Library rt found: YES 00:02:09.117 Checking for function "getentropy" : NO 00:02:09.117 Message: lib/eal: Defining dependency "eal" 00:02:09.117 Message: lib/ring: Defining dependency "ring" 00:02:09.117 Message: lib/rcu: Defining dependency "rcu" 00:02:09.117 Message: lib/mempool: Defining dependency "mempool" 00:02:09.117 Message: lib/mbuf: Defining dependency "mbuf" 00:02:09.117 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:09.117 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.117 Compiler for C supports arguments -mpclmul: YES 00:02:09.117 Compiler for C supports arguments -maes: YES 00:02:09.117 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.117 Compiler for C supports arguments -mavx512bw: YES 00:02:09.117 Compiler for C supports arguments -mavx512dq: YES 00:02:09.117 Compiler for C supports arguments -mavx512vl: YES 00:02:09.117 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:09.117 Compiler for C supports arguments -mavx2: YES 00:02:09.117 Compiler for C supports arguments -mavx: YES 00:02:09.117 Message: lib/net: Defining dependency "net" 00:02:09.117 Message: lib/meter: Defining dependency "meter" 00:02:09.117 Message: lib/ethdev: Defining dependency "ethdev" 00:02:09.117 Message: lib/pci: Defining dependency "pci" 00:02:09.117 Message: lib/cmdline: Defining dependency "cmdline" 00:02:09.117 Message: lib/hash: Defining dependency "hash" 00:02:09.117 Message: lib/timer: Defining dependency "timer" 00:02:09.117 Message: lib/compressdev: Defining dependency "compressdev" 00:02:09.117 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:09.117 Message: lib/dmadev: Defining dependency "dmadev" 00:02:09.117 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:09.117 Message: lib/power: Defining dependency "power" 00:02:09.117 Message: lib/reorder: Defining dependency "reorder" 00:02:09.117 Message: lib/security: Defining dependency "security" 00:02:09.117 Has header "linux/userfaultfd.h" : YES 00:02:09.117 Has header "linux/vduse.h" : YES 00:02:09.117 Message: lib/vhost: Defining dependency "vhost" 00:02:09.117 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:09.117 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:09.117 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.117 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.117 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:09.117 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:09.117 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:09.117 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:09.117 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:09.117 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:09.117 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:09.117 Configuring doxy-api-html.conf using configuration 00:02:09.117 Configuring doxy-api-man.conf using configuration 00:02:09.117 Program mandb found: YES (/usr/bin/mandb) 00:02:09.117 Program sphinx-build found: NO 00:02:09.117 Configuring rte_build_config.h using configuration 00:02:09.117 Message: 00:02:09.118 ================= 00:02:09.118 Applications Enabled 00:02:09.118 ================= 00:02:09.118 00:02:09.118 apps: 00:02:09.118 00:02:09.118 00:02:09.118 Message: 00:02:09.118 ================= 00:02:09.118 Libraries Enabled 00:02:09.118 ================= 00:02:09.118 00:02:09.118 libs: 00:02:09.118 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:09.118 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:09.118 cryptodev, dmadev, power, reorder, security, vhost, 00:02:09.118 00:02:09.118 Message: 00:02:09.118 =============== 00:02:09.118 Drivers Enabled 00:02:09.118 =============== 00:02:09.118 00:02:09.118 common: 00:02:09.118 00:02:09.118 bus: 00:02:09.118 pci, vdev, 00:02:09.118 mempool: 00:02:09.118 ring, 00:02:09.118 dma: 00:02:09.118 00:02:09.118 net: 00:02:09.118 00:02:09.118 crypto: 00:02:09.118 00:02:09.118 compress: 00:02:09.118 00:02:09.118 vdpa: 00:02:09.118 00:02:09.118 00:02:09.118 Message: 00:02:09.118 ================= 00:02:09.118 Content Skipped 00:02:09.118 ================= 00:02:09.118 00:02:09.118 apps: 00:02:09.118 dumpcap: explicitly disabled via build config 00:02:09.118 graph: explicitly disabled via build config 00:02:09.118 pdump: explicitly disabled via build config 00:02:09.118 proc-info: explicitly disabled via build config 00:02:09.118 test-acl: explicitly disabled via build config 00:02:09.118 test-bbdev: explicitly disabled via build config 00:02:09.118 test-cmdline: explicitly disabled via build config 00:02:09.118 test-compress-perf: explicitly disabled via build config 00:02:09.118 test-crypto-perf: explicitly disabled via build config 00:02:09.118 test-dma-perf: explicitly disabled via build config 00:02:09.118 test-eventdev: explicitly disabled via build config 00:02:09.118 test-fib: explicitly disabled via build config 00:02:09.118 test-flow-perf: explicitly disabled via build config 00:02:09.118 test-gpudev: explicitly disabled via build config 00:02:09.118 test-mldev: explicitly disabled via build config 00:02:09.118 test-pipeline: explicitly disabled via build config 00:02:09.118 test-pmd: explicitly disabled via build config 00:02:09.118 test-regex: explicitly disabled via build config 00:02:09.118 test-sad: explicitly disabled via build config 00:02:09.118 test-security-perf: explicitly disabled via build config 00:02:09.118 00:02:09.118 libs: 00:02:09.118 argparse: explicitly 
disabled via build config 00:02:09.118 metrics: explicitly disabled via build config 00:02:09.118 acl: explicitly disabled via build config 00:02:09.118 bbdev: explicitly disabled via build config 00:02:09.118 bitratestats: explicitly disabled via build config 00:02:09.118 bpf: explicitly disabled via build config 00:02:09.118 cfgfile: explicitly disabled via build config 00:02:09.118 distributor: explicitly disabled via build config 00:02:09.118 efd: explicitly disabled via build config 00:02:09.118 eventdev: explicitly disabled via build config 00:02:09.118 dispatcher: explicitly disabled via build config 00:02:09.118 gpudev: explicitly disabled via build config 00:02:09.118 gro: explicitly disabled via build config 00:02:09.118 gso: explicitly disabled via build config 00:02:09.118 ip_frag: explicitly disabled via build config 00:02:09.118 jobstats: explicitly disabled via build config 00:02:09.118 latencystats: explicitly disabled via build config 00:02:09.118 lpm: explicitly disabled via build config 00:02:09.118 member: explicitly disabled via build config 00:02:09.118 pcapng: explicitly disabled via build config 00:02:09.118 rawdev: explicitly disabled via build config 00:02:09.118 regexdev: explicitly disabled via build config 00:02:09.118 mldev: explicitly disabled via build config 00:02:09.118 rib: explicitly disabled via build config 00:02:09.118 sched: explicitly disabled via build config 00:02:09.118 stack: explicitly disabled via build config 00:02:09.118 ipsec: explicitly disabled via build config 00:02:09.118 pdcp: explicitly disabled via build config 00:02:09.118 fib: explicitly disabled via build config 00:02:09.118 port: explicitly disabled via build config 00:02:09.118 pdump: explicitly disabled via build config 00:02:09.118 table: explicitly disabled via build config 00:02:09.118 pipeline: explicitly disabled via build config 00:02:09.118 graph: explicitly disabled via build config 00:02:09.118 node: explicitly disabled via build config 00:02:09.118 00:02:09.118 drivers: 00:02:09.118 common/cpt: not in enabled drivers build config 00:02:09.118 common/dpaax: not in enabled drivers build config 00:02:09.118 common/iavf: not in enabled drivers build config 00:02:09.118 common/idpf: not in enabled drivers build config 00:02:09.118 common/ionic: not in enabled drivers build config 00:02:09.118 common/mvep: not in enabled drivers build config 00:02:09.118 common/octeontx: not in enabled drivers build config 00:02:09.118 bus/auxiliary: not in enabled drivers build config 00:02:09.118 bus/cdx: not in enabled drivers build config 00:02:09.118 bus/dpaa: not in enabled drivers build config 00:02:09.118 bus/fslmc: not in enabled drivers build config 00:02:09.118 bus/ifpga: not in enabled drivers build config 00:02:09.118 bus/platform: not in enabled drivers build config 00:02:09.118 bus/uacce: not in enabled drivers build config 00:02:09.118 bus/vmbus: not in enabled drivers build config 00:02:09.118 common/cnxk: not in enabled drivers build config 00:02:09.118 common/mlx5: not in enabled drivers build config 00:02:09.118 common/nfp: not in enabled drivers build config 00:02:09.118 common/nitrox: not in enabled drivers build config 00:02:09.118 common/qat: not in enabled drivers build config 00:02:09.118 common/sfc_efx: not in enabled drivers build config 00:02:09.118 mempool/bucket: not in enabled drivers build config 00:02:09.118 mempool/cnxk: not in enabled drivers build config 00:02:09.118 mempool/dpaa: not in enabled drivers build config 00:02:09.118 mempool/dpaa2: not in 
enabled drivers build config 00:02:09.118 mempool/octeontx: not in enabled drivers build config 00:02:09.118 mempool/stack: not in enabled drivers build config 00:02:09.118 dma/cnxk: not in enabled drivers build config 00:02:09.118 dma/dpaa: not in enabled drivers build config 00:02:09.118 dma/dpaa2: not in enabled drivers build config 00:02:09.118 dma/hisilicon: not in enabled drivers build config 00:02:09.118 dma/idxd: not in enabled drivers build config 00:02:09.118 dma/ioat: not in enabled drivers build config 00:02:09.118 dma/skeleton: not in enabled drivers build config 00:02:09.118 net/af_packet: not in enabled drivers build config 00:02:09.118 net/af_xdp: not in enabled drivers build config 00:02:09.118 net/ark: not in enabled drivers build config 00:02:09.118 net/atlantic: not in enabled drivers build config 00:02:09.118 net/avp: not in enabled drivers build config 00:02:09.118 net/axgbe: not in enabled drivers build config 00:02:09.118 net/bnx2x: not in enabled drivers build config 00:02:09.118 net/bnxt: not in enabled drivers build config 00:02:09.118 net/bonding: not in enabled drivers build config 00:02:09.118 net/cnxk: not in enabled drivers build config 00:02:09.118 net/cpfl: not in enabled drivers build config 00:02:09.118 net/cxgbe: not in enabled drivers build config 00:02:09.118 net/dpaa: not in enabled drivers build config 00:02:09.118 net/dpaa2: not in enabled drivers build config 00:02:09.118 net/e1000: not in enabled drivers build config 00:02:09.118 net/ena: not in enabled drivers build config 00:02:09.118 net/enetc: not in enabled drivers build config 00:02:09.118 net/enetfec: not in enabled drivers build config 00:02:09.118 net/enic: not in enabled drivers build config 00:02:09.118 net/failsafe: not in enabled drivers build config 00:02:09.118 net/fm10k: not in enabled drivers build config 00:02:09.118 net/gve: not in enabled drivers build config 00:02:09.118 net/hinic: not in enabled drivers build config 00:02:09.118 net/hns3: not in enabled drivers build config 00:02:09.118 net/i40e: not in enabled drivers build config 00:02:09.118 net/iavf: not in enabled drivers build config 00:02:09.118 net/ice: not in enabled drivers build config 00:02:09.118 net/idpf: not in enabled drivers build config 00:02:09.118 net/igc: not in enabled drivers build config 00:02:09.118 net/ionic: not in enabled drivers build config 00:02:09.118 net/ipn3ke: not in enabled drivers build config 00:02:09.118 net/ixgbe: not in enabled drivers build config 00:02:09.118 net/mana: not in enabled drivers build config 00:02:09.118 net/memif: not in enabled drivers build config 00:02:09.118 net/mlx4: not in enabled drivers build config 00:02:09.118 net/mlx5: not in enabled drivers build config 00:02:09.118 net/mvneta: not in enabled drivers build config 00:02:09.118 net/mvpp2: not in enabled drivers build config 00:02:09.118 net/netvsc: not in enabled drivers build config 00:02:09.118 net/nfb: not in enabled drivers build config 00:02:09.118 net/nfp: not in enabled drivers build config 00:02:09.118 net/ngbe: not in enabled drivers build config 00:02:09.118 net/null: not in enabled drivers build config 00:02:09.118 net/octeontx: not in enabled drivers build config 00:02:09.118 net/octeon_ep: not in enabled drivers build config 00:02:09.118 net/pcap: not in enabled drivers build config 00:02:09.118 net/pfe: not in enabled drivers build config 00:02:09.118 net/qede: not in enabled drivers build config 00:02:09.118 net/ring: not in enabled drivers build config 00:02:09.118 net/sfc: not in enabled 
drivers build config 00:02:09.118 net/softnic: not in enabled drivers build config 00:02:09.118 net/tap: not in enabled drivers build config 00:02:09.118 net/thunderx: not in enabled drivers build config 00:02:09.118 net/txgbe: not in enabled drivers build config 00:02:09.118 net/vdev_netvsc: not in enabled drivers build config 00:02:09.118 net/vhost: not in enabled drivers build config 00:02:09.118 net/virtio: not in enabled drivers build config 00:02:09.118 net/vmxnet3: not in enabled drivers build config 00:02:09.118 raw/*: missing internal dependency, "rawdev" 00:02:09.118 crypto/armv8: not in enabled drivers build config 00:02:09.118 crypto/bcmfs: not in enabled drivers build config 00:02:09.118 crypto/caam_jr: not in enabled drivers build config 00:02:09.118 crypto/ccp: not in enabled drivers build config 00:02:09.118 crypto/cnxk: not in enabled drivers build config 00:02:09.118 crypto/dpaa_sec: not in enabled drivers build config 00:02:09.118 crypto/dpaa2_sec: not in enabled drivers build config 00:02:09.118 crypto/ipsec_mb: not in enabled drivers build config 00:02:09.118 crypto/mlx5: not in enabled drivers build config 00:02:09.118 crypto/mvsam: not in enabled drivers build config 00:02:09.118 crypto/nitrox: not in enabled drivers build config 00:02:09.119 crypto/null: not in enabled drivers build config 00:02:09.119 crypto/octeontx: not in enabled drivers build config 00:02:09.119 crypto/openssl: not in enabled drivers build config 00:02:09.119 crypto/scheduler: not in enabled drivers build config 00:02:09.119 crypto/uadk: not in enabled drivers build config 00:02:09.119 crypto/virtio: not in enabled drivers build config 00:02:09.119 compress/isal: not in enabled drivers build config 00:02:09.119 compress/mlx5: not in enabled drivers build config 00:02:09.119 compress/nitrox: not in enabled drivers build config 00:02:09.119 compress/octeontx: not in enabled drivers build config 00:02:09.119 compress/zlib: not in enabled drivers build config 00:02:09.119 regex/*: missing internal dependency, "regexdev" 00:02:09.119 ml/*: missing internal dependency, "mldev" 00:02:09.119 vdpa/ifc: not in enabled drivers build config 00:02:09.119 vdpa/mlx5: not in enabled drivers build config 00:02:09.119 vdpa/nfp: not in enabled drivers build config 00:02:09.119 vdpa/sfc: not in enabled drivers build config 00:02:09.119 event/*: missing internal dependency, "eventdev" 00:02:09.119 baseband/*: missing internal dependency, "bbdev" 00:02:09.119 gpu/*: missing internal dependency, "gpudev" 00:02:09.119 00:02:09.119 00:02:09.119 Build targets in project: 85 00:02:09.119 00:02:09.119 DPDK 24.03.0 00:02:09.119 00:02:09.119 User defined options 00:02:09.119 buildtype : debug 00:02:09.119 default_library : shared 00:02:09.119 libdir : lib 00:02:09.119 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:09.119 b_sanitize : address 00:02:09.119 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:09.119 c_link_args : 00:02:09.119 cpu_instruction_set: native 00:02:09.119 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:09.119 disable_libs : 
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:09.119 enable_docs : false 00:02:09.119 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:09.119 enable_kmods : false 00:02:09.119 max_lcores : 128 00:02:09.119 tests : false 00:02:09.119 00:02:09.119 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:09.119 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:09.119 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.119 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.119 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.119 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.119 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:09.119 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.119 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.119 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.119 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.119 [10/268] Linking static target lib/librte_kvargs.a 00:02:09.119 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:09.119 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:09.119 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.119 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.119 [15/268] Linking static target lib/librte_log.a 00:02:09.119 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:09.690 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.690 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.690 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.690 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.690 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:09.690 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.690 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:09.690 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:09.690 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:09.690 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:09.690 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:09.690 [28/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:09.690 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:09.690 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:09.690 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:09.690 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:09.690 [33/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:09.690 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:09.690 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:09.690 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.690 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:09.690 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:09.690 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:09.690 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:09.690 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.690 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.690 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:09.690 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:09.690 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:09.690 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:09.690 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:09.690 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:09.690 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:09.690 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.954 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:09.954 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:09.954 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:09.954 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:09.954 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:09.954 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.954 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:09.954 [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:09.954 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.954 [60/268] Linking static target lib/librte_telemetry.a 00:02:09.954 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:09.954 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.217 [63/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.217 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:10.217 [65/268] Linking target lib/librte_log.so.24.1 00:02:10.217 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:10.484 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:10.484 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:10.484 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:10.484 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:10.484 [71/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:10.484 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:10.484 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:10.748 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:10.748 [75/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:10.748 [76/268] Linking static target lib/librte_pci.a 00:02:10.748 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:10.748 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:10.748 [79/268] Linking target lib/librte_kvargs.so.24.1 00:02:10.748 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:10.748 [81/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.748 [82/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:10.748 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:10.748 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:10.748 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:10.748 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:10.748 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:10.748 [88/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:10.748 [89/268] Linking static target lib/librte_ring.a 00:02:10.748 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:10.748 [91/268] Linking static target lib/librte_meter.a 00:02:10.748 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:10.748 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:10.748 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:10.748 [95/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.748 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:10.748 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:10.748 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:10.748 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:10.748 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:10.748 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:10.748 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:11.010 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:11.010 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:11.010 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:11.010 [106/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.010 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:11.010 [108/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:11.010 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.010 [110/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.010 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.010 [112/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.010 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:11.010 [114/268] Compiling C 
object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:11.010 [115/268] Linking static target lib/librte_mempool.a 00:02:11.010 [116/268] Linking target lib/librte_telemetry.so.24.1 00:02:11.273 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:11.273 [118/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:11.273 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:11.273 [120/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:11.273 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.273 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:11.273 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:11.273 [124/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.273 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:11.273 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:11.273 [127/268] Linking static target lib/librte_rcu.a 00:02:11.273 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:11.273 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:11.534 [130/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:11.534 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:11.534 [132/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.534 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:11.534 [134/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:11.534 [135/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:11.796 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:11.796 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:11.796 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:11.796 [139/268] Linking static target lib/librte_cmdline.a 00:02:11.796 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:11.796 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:11.796 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:11.796 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:11.796 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:11.796 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:11.796 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:11.796 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:12.058 [148/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:12.058 [149/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:12.058 [150/268] Linking static target lib/librte_timer.a 00:02:12.058 [151/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:12.058 [152/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.058 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:12.058 [154/268] Linking 
static target lib/librte_eal.a 00:02:12.058 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.058 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:12.058 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:12.058 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:12.058 [159/268] Linking static target lib/librte_dmadev.a 00:02:12.317 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.317 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.317 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:12.317 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.577 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:12.577 [165/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:12.577 [166/268] Linking static target lib/librte_net.a 00:02:12.577 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:12.577 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:12.577 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:12.577 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:12.577 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:12.577 [172/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.577 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:12.577 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:12.577 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.577 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:12.577 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:12.835 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:12.835 [179/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:12.835 [180/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:12.835 [181/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:12.835 [182/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.835 [183/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.835 [184/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.835 [185/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:12.835 [186/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:12.835 [187/268] Linking static target lib/librte_hash.a 00:02:12.835 [188/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:12.836 [189/268] Linking static target lib/librte_power.a 00:02:13.094 [190/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:13.094 [191/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.094 [192/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.094 [193/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:13.094 
[194/268] Linking static target drivers/librte_bus_vdev.a 00:02:13.094 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:13.094 [196/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.094 [197/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.094 [198/268] Linking static target drivers/librte_bus_pci.a 00:02:13.094 [199/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:13.094 [200/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:13.094 [201/268] Linking static target lib/librte_compressdev.a 00:02:13.094 [202/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:13.352 [203/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.352 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:13.352 [205/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.352 [206/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.352 [207/268] Linking static target drivers/librte_mempool_ring.a 00:02:13.352 [208/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.352 [209/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.352 [210/268] Linking static target lib/librte_reorder.a 00:02:13.352 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:13.352 [212/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.616 [213/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.616 [214/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.616 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.184 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:14.184 [217/268] Linking static target lib/librte_security.a 00:02:14.441 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.441 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.376 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.376 [221/268] Linking static target lib/librte_mbuf.a 00:02:15.376 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:15.376 [223/268] Linking static target lib/librte_cryptodev.a 00:02:15.634 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.568 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:16.568 [226/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.568 [227/268] Linking static target lib/librte_ethdev.a 00:02:17.941 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.941 [229/268] Linking target lib/librte_eal.so.24.1 00:02:18.198 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:18.198 [231/268] Linking target lib/librte_ring.so.24.1 00:02:18.198 [232/268] Linking target lib/librte_pci.so.24.1 
00:02:18.198 [233/268] Linking target lib/librte_meter.so.24.1 00:02:18.198 [234/268] Linking target lib/librte_timer.so.24.1 00:02:18.198 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:18.198 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:18.456 [237/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:18.456 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:18.456 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:18.456 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:18.456 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:18.456 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:18.456 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:18.456 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:18.456 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:18.456 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:18.456 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:18.456 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:18.714 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:18.714 [250/268] Linking target lib/librte_reorder.so.24.1 00:02:18.714 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:18.714 [252/268] Linking target lib/librte_net.so.24.1 00:02:18.714 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:18.972 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:18.972 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:18.972 [256/268] Linking target lib/librte_cmdline.so.24.1 00:02:18.972 [257/268] Linking target lib/librte_security.so.24.1 00:02:18.972 [258/268] Linking target lib/librte_hash.so.24.1 00:02:18.972 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:19.537 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:20.912 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.912 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:20.912 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:20.912 [264/268] Linking target lib/librte_power.so.24.1 00:02:47.445 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.445 [266/268] Linking static target lib/librte_vhost.a 00:02:47.445 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.445 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:47.445 INFO: autodetecting backend as ninja 00:02:47.445 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:47.703 CC lib/ut_mock/mock.o 00:02:47.703 CC lib/ut/ut.o 00:02:47.703 CC lib/log/log.o 00:02:47.703 CC lib/log/log_flags.o 00:02:47.703 CC lib/log/log_deprecated.o 00:02:47.961 LIB libspdk_ut.a 00:02:47.961 LIB libspdk_ut_mock.a 00:02:47.961 LIB libspdk_log.a 00:02:47.961 SO libspdk_ut.so.2.0 00:02:47.961 SO libspdk_ut_mock.so.6.0 00:02:47.961 SO libspdk_log.so.7.1 00:02:47.961 SYMLINK libspdk_ut_mock.so 00:02:47.961 SYMLINK 
libspdk_ut.so 00:02:47.961 SYMLINK libspdk_log.so 00:02:48.220 CXX lib/trace_parser/trace.o 00:02:48.220 CC lib/ioat/ioat.o 00:02:48.220 CC lib/dma/dma.o 00:02:48.220 CC lib/util/base64.o 00:02:48.220 CC lib/util/bit_array.o 00:02:48.220 CC lib/util/cpuset.o 00:02:48.220 CC lib/util/crc16.o 00:02:48.220 CC lib/util/crc32.o 00:02:48.220 CC lib/util/crc32c.o 00:02:48.220 CC lib/util/crc32_ieee.o 00:02:48.220 CC lib/util/crc64.o 00:02:48.220 CC lib/util/dif.o 00:02:48.220 CC lib/util/fd.o 00:02:48.220 CC lib/util/fd_group.o 00:02:48.220 CC lib/util/file.o 00:02:48.220 CC lib/util/hexlify.o 00:02:48.220 CC lib/util/iov.o 00:02:48.220 CC lib/util/math.o 00:02:48.220 CC lib/util/net.o 00:02:48.220 CC lib/util/pipe.o 00:02:48.220 CC lib/util/strerror_tls.o 00:02:48.220 CC lib/util/string.o 00:02:48.220 CC lib/util/uuid.o 00:02:48.220 CC lib/util/xor.o 00:02:48.220 CC lib/util/md5.o 00:02:48.220 CC lib/util/zipf.o 00:02:48.220 CC lib/vfio_user/host/vfio_user_pci.o 00:02:48.220 CC lib/vfio_user/host/vfio_user.o 00:02:48.479 LIB libspdk_dma.a 00:02:48.479 LIB libspdk_ioat.a 00:02:48.479 SO libspdk_dma.so.5.0 00:02:48.479 SO libspdk_ioat.so.7.0 00:02:48.479 LIB libspdk_vfio_user.a 00:02:48.479 SYMLINK libspdk_dma.so 00:02:48.479 SYMLINK libspdk_ioat.so 00:02:48.479 SO libspdk_vfio_user.so.5.0 00:02:48.737 SYMLINK libspdk_vfio_user.so 00:02:48.996 LIB libspdk_util.a 00:02:48.996 SO libspdk_util.so.10.1 00:02:48.996 SYMLINK libspdk_util.so 00:02:49.254 CC lib/conf/conf.o 00:02:49.254 CC lib/vmd/vmd.o 00:02:49.254 CC lib/idxd/idxd.o 00:02:49.254 CC lib/env_dpdk/env.o 00:02:49.254 CC lib/vmd/led.o 00:02:49.254 CC lib/idxd/idxd_user.o 00:02:49.254 CC lib/env_dpdk/memory.o 00:02:49.254 CC lib/rdma_utils/rdma_utils.o 00:02:49.254 CC lib/idxd/idxd_kernel.o 00:02:49.254 CC lib/json/json_parse.o 00:02:49.254 CC lib/env_dpdk/pci.o 00:02:49.254 CC lib/json/json_util.o 00:02:49.254 CC lib/env_dpdk/init.o 00:02:49.254 CC lib/json/json_write.o 00:02:49.254 CC lib/env_dpdk/threads.o 00:02:49.254 CC lib/env_dpdk/pci_ioat.o 00:02:49.254 CC lib/env_dpdk/pci_virtio.o 00:02:49.254 CC lib/env_dpdk/pci_vmd.o 00:02:49.254 CC lib/env_dpdk/pci_idxd.o 00:02:49.254 CC lib/env_dpdk/pci_event.o 00:02:49.254 CC lib/env_dpdk/pci_dpdk.o 00:02:49.254 CC lib/env_dpdk/sigbus_handler.o 00:02:49.254 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:49.254 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:49.254 LIB libspdk_trace_parser.a 00:02:49.254 SO libspdk_trace_parser.so.6.0 00:02:49.513 SYMLINK libspdk_trace_parser.so 00:02:49.513 LIB libspdk_conf.a 00:02:49.513 SO libspdk_conf.so.6.0 00:02:49.513 LIB libspdk_rdma_utils.a 00:02:49.513 SYMLINK libspdk_conf.so 00:02:49.513 LIB libspdk_json.a 00:02:49.513 SO libspdk_rdma_utils.so.1.0 00:02:49.772 SO libspdk_json.so.6.0 00:02:49.772 SYMLINK libspdk_rdma_utils.so 00:02:49.772 SYMLINK libspdk_json.so 00:02:49.772 CC lib/rdma_provider/common.o 00:02:49.772 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:49.772 CC lib/jsonrpc/jsonrpc_server.o 00:02:49.772 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:49.772 CC lib/jsonrpc/jsonrpc_client.o 00:02:49.772 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:50.031 LIB libspdk_idxd.a 00:02:50.031 LIB libspdk_rdma_provider.a 00:02:50.031 SO libspdk_rdma_provider.so.7.0 00:02:50.031 SO libspdk_idxd.so.12.1 00:02:50.289 LIB libspdk_vmd.a 00:02:50.289 SYMLINK libspdk_rdma_provider.so 00:02:50.289 LIB libspdk_jsonrpc.a 00:02:50.289 SO libspdk_vmd.so.6.0 00:02:50.289 SYMLINK libspdk_idxd.so 00:02:50.289 SO libspdk_jsonrpc.so.6.0 00:02:50.289 SYMLINK libspdk_vmd.so 00:02:50.289 SYMLINK 
libspdk_jsonrpc.so 00:02:50.548 CC lib/rpc/rpc.o 00:02:50.806 LIB libspdk_rpc.a 00:02:50.806 SO libspdk_rpc.so.6.0 00:02:50.806 SYMLINK libspdk_rpc.so 00:02:50.806 CC lib/notify/notify.o 00:02:50.806 CC lib/notify/notify_rpc.o 00:02:50.806 CC lib/trace/trace.o 00:02:50.806 CC lib/keyring/keyring.o 00:02:50.806 CC lib/keyring/keyring_rpc.o 00:02:50.806 CC lib/trace/trace_flags.o 00:02:50.806 CC lib/trace/trace_rpc.o 00:02:51.065 LIB libspdk_notify.a 00:02:51.065 SO libspdk_notify.so.6.0 00:02:51.065 SYMLINK libspdk_notify.so 00:02:51.065 LIB libspdk_keyring.a 00:02:51.323 SO libspdk_keyring.so.2.0 00:02:51.323 LIB libspdk_trace.a 00:02:51.323 SO libspdk_trace.so.11.0 00:02:51.323 SYMLINK libspdk_keyring.so 00:02:51.323 SYMLINK libspdk_trace.so 00:02:51.581 CC lib/sock/sock.o 00:02:51.581 CC lib/sock/sock_rpc.o 00:02:51.581 CC lib/thread/thread.o 00:02:51.581 CC lib/thread/iobuf.o 00:02:51.839 LIB libspdk_sock.a 00:02:52.097 SO libspdk_sock.so.10.0 00:02:52.097 SYMLINK libspdk_sock.so 00:02:52.097 LIB libspdk_env_dpdk.a 00:02:52.097 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:52.097 CC lib/nvme/nvme_ctrlr.o 00:02:52.097 CC lib/nvme/nvme_fabric.o 00:02:52.097 CC lib/nvme/nvme_ns_cmd.o 00:02:52.097 CC lib/nvme/nvme_ns.o 00:02:52.097 CC lib/nvme/nvme_pcie_common.o 00:02:52.097 CC lib/nvme/nvme_pcie.o 00:02:52.097 CC lib/nvme/nvme_qpair.o 00:02:52.097 CC lib/nvme/nvme.o 00:02:52.097 CC lib/nvme/nvme_quirks.o 00:02:52.097 CC lib/nvme/nvme_transport.o 00:02:52.097 CC lib/nvme/nvme_discovery.o 00:02:52.097 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:52.097 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:52.097 CC lib/nvme/nvme_tcp.o 00:02:52.097 CC lib/nvme/nvme_opal.o 00:02:52.097 CC lib/nvme/nvme_io_msg.o 00:02:52.097 CC lib/nvme/nvme_poll_group.o 00:02:52.097 CC lib/nvme/nvme_zns.o 00:02:52.097 CC lib/nvme/nvme_stubs.o 00:02:52.097 CC lib/nvme/nvme_auth.o 00:02:52.097 CC lib/nvme/nvme_rdma.o 00:02:52.097 CC lib/nvme/nvme_cuse.o 00:02:52.356 SO libspdk_env_dpdk.so.15.1 00:02:52.615 SYMLINK libspdk_env_dpdk.so 00:02:53.550 LIB libspdk_thread.a 00:02:53.550 SO libspdk_thread.so.11.0 00:02:53.550 SYMLINK libspdk_thread.so 00:02:53.808 CC lib/virtio/virtio.o 00:02:53.808 CC lib/fsdev/fsdev.o 00:02:53.808 CC lib/accel/accel.o 00:02:53.808 CC lib/init/json_config.o 00:02:53.808 CC lib/blob/blobstore.o 00:02:53.808 CC lib/virtio/virtio_vhost_user.o 00:02:53.808 CC lib/fsdev/fsdev_io.o 00:02:53.808 CC lib/blob/request.o 00:02:53.808 CC lib/accel/accel_rpc.o 00:02:53.808 CC lib/init/subsystem.o 00:02:53.808 CC lib/virtio/virtio_vfio_user.o 00:02:53.808 CC lib/fsdev/fsdev_rpc.o 00:02:53.808 CC lib/blob/zeroes.o 00:02:53.808 CC lib/virtio/virtio_pci.o 00:02:53.808 CC lib/accel/accel_sw.o 00:02:53.808 CC lib/init/subsystem_rpc.o 00:02:53.808 CC lib/blob/blob_bs_dev.o 00:02:53.808 CC lib/init/rpc.o 00:02:54.066 LIB libspdk_init.a 00:02:54.066 SO libspdk_init.so.6.0 00:02:54.066 SYMLINK libspdk_init.so 00:02:54.325 LIB libspdk_virtio.a 00:02:54.325 SO libspdk_virtio.so.7.0 00:02:54.325 SYMLINK libspdk_virtio.so 00:02:54.325 CC lib/event/app.o 00:02:54.325 CC lib/event/reactor.o 00:02:54.325 CC lib/event/log_rpc.o 00:02:54.325 CC lib/event/app_rpc.o 00:02:54.325 CC lib/event/scheduler_static.o 00:02:54.583 LIB libspdk_fsdev.a 00:02:54.583 SO libspdk_fsdev.so.2.0 00:02:54.840 SYMLINK libspdk_fsdev.so 00:02:54.840 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:55.098 LIB libspdk_event.a 00:02:55.098 SO libspdk_event.so.14.0 00:02:55.098 SYMLINK libspdk_event.so 00:02:55.356 LIB libspdk_nvme.a 00:02:55.356 LIB libspdk_accel.a 
00:02:55.356 SO libspdk_accel.so.16.0 00:02:55.356 SO libspdk_nvme.so.15.0 00:02:55.356 SYMLINK libspdk_accel.so 00:02:55.614 CC lib/bdev/bdev.o 00:02:55.614 CC lib/bdev/bdev_rpc.o 00:02:55.614 CC lib/bdev/bdev_zone.o 00:02:55.614 CC lib/bdev/part.o 00:02:55.614 CC lib/bdev/scsi_nvme.o 00:02:55.614 SYMLINK libspdk_nvme.so 00:02:55.872 LIB libspdk_fuse_dispatcher.a 00:02:55.872 SO libspdk_fuse_dispatcher.so.1.0 00:02:55.872 SYMLINK libspdk_fuse_dispatcher.so 00:02:58.401 LIB libspdk_blob.a 00:02:58.401 SO libspdk_blob.so.11.0 00:02:58.401 SYMLINK libspdk_blob.so 00:02:58.401 CC lib/blobfs/blobfs.o 00:02:58.401 CC lib/blobfs/tree.o 00:02:58.401 CC lib/lvol/lvol.o 00:02:58.967 LIB libspdk_bdev.a 00:02:58.967 SO libspdk_bdev.so.17.0 00:02:58.967 SYMLINK libspdk_bdev.so 00:02:59.231 CC lib/ublk/ublk.o 00:02:59.231 CC lib/scsi/dev.o 00:02:59.231 CC lib/ublk/ublk_rpc.o 00:02:59.231 CC lib/nvmf/ctrlr.o 00:02:59.231 CC lib/nbd/nbd.o 00:02:59.231 CC lib/ftl/ftl_core.o 00:02:59.231 CC lib/scsi/lun.o 00:02:59.231 CC lib/nbd/nbd_rpc.o 00:02:59.231 CC lib/ftl/ftl_init.o 00:02:59.231 CC lib/scsi/port.o 00:02:59.231 CC lib/nvmf/ctrlr_discovery.o 00:02:59.231 CC lib/ftl/ftl_layout.o 00:02:59.231 CC lib/nvmf/ctrlr_bdev.o 00:02:59.231 CC lib/scsi/scsi.o 00:02:59.231 CC lib/ftl/ftl_debug.o 00:02:59.231 CC lib/nvmf/subsystem.o 00:02:59.231 CC lib/scsi/scsi_bdev.o 00:02:59.231 CC lib/ftl/ftl_io.o 00:02:59.231 CC lib/ftl/ftl_sb.o 00:02:59.231 CC lib/scsi/scsi_pr.o 00:02:59.231 CC lib/nvmf/nvmf.o 00:02:59.231 CC lib/nvmf/nvmf_rpc.o 00:02:59.231 CC lib/scsi/scsi_rpc.o 00:02:59.231 CC lib/ftl/ftl_l2p.o 00:02:59.231 CC lib/nvmf/transport.o 00:02:59.231 CC lib/ftl/ftl_l2p_flat.o 00:02:59.231 CC lib/ftl/ftl_nv_cache.o 00:02:59.231 CC lib/scsi/task.o 00:02:59.231 CC lib/nvmf/tcp.o 00:02:59.231 CC lib/nvmf/stubs.o 00:02:59.231 CC lib/ftl/ftl_band.o 00:02:59.231 CC lib/nvmf/mdns_server.o 00:02:59.231 CC lib/ftl/ftl_band_ops.o 00:02:59.231 CC lib/ftl/ftl_writer.o 00:02:59.231 CC lib/nvmf/rdma.o 00:02:59.231 CC lib/ftl/ftl_rq.o 00:02:59.231 CC lib/nvmf/auth.o 00:02:59.231 CC lib/ftl/ftl_reloc.o 00:02:59.231 CC lib/ftl/ftl_l2p_cache.o 00:02:59.231 CC lib/ftl/ftl_p2l.o 00:02:59.231 CC lib/ftl/ftl_p2l_log.o 00:02:59.231 CC lib/ftl/mngt/ftl_mngt.o 00:02:59.231 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:59.231 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:59.231 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:59.231 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:59.490 LIB libspdk_blobfs.a 00:02:59.490 SO libspdk_blobfs.so.10.0 00:02:59.748 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:59.748 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:59.748 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:59.748 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:59.748 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:59.748 SYMLINK libspdk_blobfs.so 00:02:59.748 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:59.748 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:59.748 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:59.748 CC lib/ftl/utils/ftl_conf.o 00:02:59.748 CC lib/ftl/utils/ftl_md.o 00:02:59.748 CC lib/ftl/utils/ftl_mempool.o 00:02:59.748 LIB libspdk_lvol.a 00:02:59.748 CC lib/ftl/utils/ftl_bitmap.o 00:02:59.748 CC lib/ftl/utils/ftl_property.o 00:02:59.748 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:59.748 SO libspdk_lvol.so.10.0 00:02:59.748 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:59.748 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:59.748 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:00.007 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:00.007 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:00.007 SYMLINK libspdk_lvol.so 
00:03:00.007 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:00.007 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:00.007 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:00.007 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:00.007 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:00.007 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:00.007 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:00.007 CC lib/ftl/base/ftl_base_dev.o 00:03:00.007 CC lib/ftl/base/ftl_base_bdev.o 00:03:00.007 CC lib/ftl/ftl_trace.o 00:03:00.267 LIB libspdk_nbd.a 00:03:00.267 SO libspdk_nbd.so.7.0 00:03:00.267 SYMLINK libspdk_nbd.so 00:03:00.525 LIB libspdk_scsi.a 00:03:00.525 SO libspdk_scsi.so.9.0 00:03:00.525 LIB libspdk_ublk.a 00:03:00.525 SO libspdk_ublk.so.3.0 00:03:00.525 SYMLINK libspdk_scsi.so 00:03:00.783 SYMLINK libspdk_ublk.so 00:03:00.783 CC lib/iscsi/conn.o 00:03:00.783 CC lib/vhost/vhost.o 00:03:00.783 CC lib/vhost/vhost_rpc.o 00:03:00.783 CC lib/iscsi/init_grp.o 00:03:00.783 CC lib/vhost/vhost_scsi.o 00:03:00.783 CC lib/iscsi/iscsi.o 00:03:00.783 CC lib/vhost/vhost_blk.o 00:03:00.783 CC lib/iscsi/param.o 00:03:00.783 CC lib/vhost/rte_vhost_user.o 00:03:00.783 CC lib/iscsi/portal_grp.o 00:03:00.783 CC lib/iscsi/tgt_node.o 00:03:00.783 CC lib/iscsi/iscsi_subsystem.o 00:03:00.783 CC lib/iscsi/iscsi_rpc.o 00:03:00.783 CC lib/iscsi/task.o 00:03:01.042 LIB libspdk_ftl.a 00:03:01.300 SO libspdk_ftl.so.9.0 00:03:01.558 SYMLINK libspdk_ftl.so 00:03:02.125 LIB libspdk_vhost.a 00:03:02.125 SO libspdk_vhost.so.8.0 00:03:02.383 SYMLINK libspdk_vhost.so 00:03:02.641 LIB libspdk_iscsi.a 00:03:02.641 SO libspdk_iscsi.so.8.0 00:03:02.900 LIB libspdk_nvmf.a 00:03:02.900 SYMLINK libspdk_iscsi.so 00:03:02.900 SO libspdk_nvmf.so.20.0 00:03:03.158 SYMLINK libspdk_nvmf.so 00:03:03.417 CC module/env_dpdk/env_dpdk_rpc.o 00:03:03.417 CC module/accel/error/accel_error.o 00:03:03.417 CC module/accel/error/accel_error_rpc.o 00:03:03.417 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:03.417 CC module/sock/posix/posix.o 00:03:03.417 CC module/accel/ioat/accel_ioat.o 00:03:03.417 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:03.417 CC module/accel/iaa/accel_iaa.o 00:03:03.417 CC module/keyring/linux/keyring.o 00:03:03.417 CC module/fsdev/aio/fsdev_aio.o 00:03:03.417 CC module/keyring/file/keyring.o 00:03:03.417 CC module/accel/iaa/accel_iaa_rpc.o 00:03:03.417 CC module/accel/ioat/accel_ioat_rpc.o 00:03:03.417 CC module/keyring/linux/keyring_rpc.o 00:03:03.417 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:03.417 CC module/keyring/file/keyring_rpc.o 00:03:03.417 CC module/scheduler/gscheduler/gscheduler.o 00:03:03.417 CC module/accel/dsa/accel_dsa.o 00:03:03.417 CC module/fsdev/aio/linux_aio_mgr.o 00:03:03.417 CC module/blob/bdev/blob_bdev.o 00:03:03.417 CC module/accel/dsa/accel_dsa_rpc.o 00:03:03.417 LIB libspdk_env_dpdk_rpc.a 00:03:03.674 SO libspdk_env_dpdk_rpc.so.6.0 00:03:03.674 SYMLINK libspdk_env_dpdk_rpc.so 00:03:03.674 LIB libspdk_keyring_linux.a 00:03:03.674 LIB libspdk_keyring_file.a 00:03:03.674 LIB libspdk_scheduler_gscheduler.a 00:03:03.674 LIB libspdk_scheduler_dpdk_governor.a 00:03:03.674 SO libspdk_keyring_linux.so.1.0 00:03:03.674 SO libspdk_keyring_file.so.2.0 00:03:03.674 SO libspdk_scheduler_gscheduler.so.4.0 00:03:03.674 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:03.674 LIB libspdk_accel_ioat.a 00:03:03.674 LIB libspdk_scheduler_dynamic.a 00:03:03.674 LIB libspdk_accel_error.a 00:03:03.674 SYMLINK libspdk_keyring_linux.so 00:03:03.674 SO libspdk_accel_ioat.so.6.0 00:03:03.674 SYMLINK libspdk_scheduler_gscheduler.so 00:03:03.674 LIB 
libspdk_accel_iaa.a 00:03:03.674 SYMLINK libspdk_keyring_file.so 00:03:03.674 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:03.674 SO libspdk_scheduler_dynamic.so.4.0 00:03:03.674 SO libspdk_accel_error.so.2.0 00:03:03.674 SO libspdk_accel_iaa.so.3.0 00:03:03.674 SYMLINK libspdk_accel_ioat.so 00:03:03.932 SYMLINK libspdk_scheduler_dynamic.so 00:03:03.932 SYMLINK libspdk_accel_error.so 00:03:03.932 SYMLINK libspdk_accel_iaa.so 00:03:03.932 LIB libspdk_blob_bdev.a 00:03:03.932 LIB libspdk_accel_dsa.a 00:03:03.932 SO libspdk_blob_bdev.so.11.0 00:03:03.932 SO libspdk_accel_dsa.so.5.0 00:03:03.932 SYMLINK libspdk_blob_bdev.so 00:03:03.932 SYMLINK libspdk_accel_dsa.so 00:03:04.191 CC module/bdev/gpt/gpt.o 00:03:04.191 CC module/bdev/malloc/bdev_malloc.o 00:03:04.191 CC module/bdev/error/vbdev_error.o 00:03:04.191 CC module/bdev/delay/vbdev_delay.o 00:03:04.191 CC module/bdev/nvme/bdev_nvme.o 00:03:04.191 CC module/bdev/lvol/vbdev_lvol.o 00:03:04.191 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:04.191 CC module/bdev/null/bdev_null.o 00:03:04.191 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:04.191 CC module/bdev/error/vbdev_error_rpc.o 00:03:04.191 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:04.191 CC module/bdev/gpt/vbdev_gpt.o 00:03:04.191 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:04.191 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:04.191 CC module/bdev/null/bdev_null_rpc.o 00:03:04.191 CC module/blobfs/bdev/blobfs_bdev.o 00:03:04.191 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:04.191 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:04.191 CC module/bdev/passthru/vbdev_passthru.o 00:03:04.191 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:04.191 CC module/bdev/split/vbdev_split.o 00:03:04.191 CC module/bdev/nvme/nvme_rpc.o 00:03:04.191 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:04.191 CC module/bdev/raid/bdev_raid.o 00:03:04.191 CC module/bdev/split/vbdev_split_rpc.o 00:03:04.191 CC module/bdev/raid/bdev_raid_rpc.o 00:03:04.191 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:04.191 CC module/bdev/nvme/bdev_mdns_client.o 00:03:04.191 CC module/bdev/raid/bdev_raid_sb.o 00:03:04.191 CC module/bdev/nvme/vbdev_opal.o 00:03:04.191 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:04.191 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:04.191 CC module/bdev/iscsi/bdev_iscsi.o 00:03:04.191 CC module/bdev/raid/raid0.o 00:03:04.191 CC module/bdev/aio/bdev_aio.o 00:03:04.191 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:04.191 CC module/bdev/raid/raid1.o 00:03:04.191 CC module/bdev/ftl/bdev_ftl.o 00:03:04.191 CC module/bdev/aio/bdev_aio_rpc.o 00:03:04.191 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:04.191 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:04.191 CC module/bdev/raid/concat.o 00:03:04.450 LIB libspdk_blobfs_bdev.a 00:03:04.709 SO libspdk_blobfs_bdev.so.6.0 00:03:04.709 LIB libspdk_fsdev_aio.a 00:03:04.709 LIB libspdk_bdev_split.a 00:03:04.709 SO libspdk_fsdev_aio.so.1.0 00:03:04.709 LIB libspdk_sock_posix.a 00:03:04.709 SO libspdk_bdev_split.so.6.0 00:03:04.709 LIB libspdk_bdev_error.a 00:03:04.709 LIB libspdk_bdev_passthru.a 00:03:04.709 SO libspdk_sock_posix.so.6.0 00:03:04.709 SYMLINK libspdk_blobfs_bdev.so 00:03:04.709 SO libspdk_bdev_error.so.6.0 00:03:04.709 SO libspdk_bdev_passthru.so.6.0 00:03:04.709 LIB libspdk_bdev_gpt.a 00:03:04.709 SYMLINK libspdk_fsdev_aio.so 00:03:04.709 SO libspdk_bdev_gpt.so.6.0 00:03:04.709 SYMLINK libspdk_bdev_split.so 00:03:04.709 LIB libspdk_bdev_ftl.a 00:03:04.709 SYMLINK libspdk_bdev_error.so 00:03:04.709 SYMLINK libspdk_bdev_passthru.so 00:03:04.709 
SO libspdk_bdev_ftl.so.6.0 00:03:04.709 SYMLINK libspdk_sock_posix.so 00:03:04.709 LIB libspdk_bdev_null.a 00:03:04.709 SYMLINK libspdk_bdev_gpt.so 00:03:04.709 LIB libspdk_bdev_iscsi.a 00:03:04.709 SO libspdk_bdev_null.so.6.0 00:03:04.709 SO libspdk_bdev_iscsi.so.6.0 00:03:04.709 SYMLINK libspdk_bdev_ftl.so 00:03:04.709 LIB libspdk_bdev_aio.a 00:03:04.709 LIB libspdk_bdev_malloc.a 00:03:04.967 SO libspdk_bdev_aio.so.6.0 00:03:04.967 SYMLINK libspdk_bdev_null.so 00:03:04.967 LIB libspdk_bdev_delay.a 00:03:04.967 SO libspdk_bdev_malloc.so.6.0 00:03:04.967 LIB libspdk_bdev_zone_block.a 00:03:04.967 SYMLINK libspdk_bdev_iscsi.so 00:03:04.967 SO libspdk_bdev_delay.so.6.0 00:03:04.967 SO libspdk_bdev_zone_block.so.6.0 00:03:04.967 SYMLINK libspdk_bdev_aio.so 00:03:04.967 SYMLINK libspdk_bdev_malloc.so 00:03:04.967 SYMLINK libspdk_bdev_zone_block.so 00:03:04.967 SYMLINK libspdk_bdev_delay.so 00:03:04.967 LIB libspdk_bdev_lvol.a 00:03:05.225 SO libspdk_bdev_lvol.so.6.0 00:03:05.225 LIB libspdk_bdev_virtio.a 00:03:05.225 SO libspdk_bdev_virtio.so.6.0 00:03:05.225 SYMLINK libspdk_bdev_lvol.so 00:03:05.225 SYMLINK libspdk_bdev_virtio.so 00:03:05.791 LIB libspdk_bdev_raid.a 00:03:05.791 SO libspdk_bdev_raid.so.6.0 00:03:05.791 SYMLINK libspdk_bdev_raid.so 00:03:08.321 LIB libspdk_bdev_nvme.a 00:03:08.321 SO libspdk_bdev_nvme.so.7.1 00:03:08.321 SYMLINK libspdk_bdev_nvme.so 00:03:08.579 CC module/event/subsystems/vmd/vmd.o 00:03:08.579 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:08.579 CC module/event/subsystems/scheduler/scheduler.o 00:03:08.579 CC module/event/subsystems/fsdev/fsdev.o 00:03:08.579 CC module/event/subsystems/iobuf/iobuf.o 00:03:08.580 CC module/event/subsystems/sock/sock.o 00:03:08.580 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:08.580 CC module/event/subsystems/keyring/keyring.o 00:03:08.580 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:08.839 LIB libspdk_event_keyring.a 00:03:08.839 LIB libspdk_event_vhost_blk.a 00:03:08.839 LIB libspdk_event_fsdev.a 00:03:08.839 LIB libspdk_event_scheduler.a 00:03:08.839 LIB libspdk_event_sock.a 00:03:08.839 LIB libspdk_event_vmd.a 00:03:08.839 SO libspdk_event_keyring.so.1.0 00:03:08.839 SO libspdk_event_vhost_blk.so.3.0 00:03:08.839 SO libspdk_event_fsdev.so.1.0 00:03:08.839 SO libspdk_event_scheduler.so.4.0 00:03:08.839 LIB libspdk_event_iobuf.a 00:03:08.839 SO libspdk_event_sock.so.5.0 00:03:08.839 SO libspdk_event_vmd.so.6.0 00:03:08.839 SO libspdk_event_iobuf.so.3.0 00:03:08.839 SYMLINK libspdk_event_keyring.so 00:03:08.839 SYMLINK libspdk_event_fsdev.so 00:03:08.839 SYMLINK libspdk_event_vhost_blk.so 00:03:08.839 SYMLINK libspdk_event_sock.so 00:03:08.839 SYMLINK libspdk_event_scheduler.so 00:03:08.839 SYMLINK libspdk_event_vmd.so 00:03:08.839 SYMLINK libspdk_event_iobuf.so 00:03:09.097 CC module/event/subsystems/accel/accel.o 00:03:09.097 LIB libspdk_event_accel.a 00:03:09.355 SO libspdk_event_accel.so.6.0 00:03:09.355 SYMLINK libspdk_event_accel.so 00:03:09.355 CC module/event/subsystems/bdev/bdev.o 00:03:09.614 LIB libspdk_event_bdev.a 00:03:09.614 SO libspdk_event_bdev.so.6.0 00:03:09.614 SYMLINK libspdk_event_bdev.so 00:03:09.872 CC module/event/subsystems/scsi/scsi.o 00:03:09.872 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:09.872 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:09.872 CC module/event/subsystems/nbd/nbd.o 00:03:09.872 CC module/event/subsystems/ublk/ublk.o 00:03:10.131 LIB libspdk_event_nbd.a 00:03:10.131 LIB libspdk_event_ublk.a 00:03:10.131 LIB libspdk_event_scsi.a 00:03:10.131 SO 
libspdk_event_nbd.so.6.0 00:03:10.131 SO libspdk_event_ublk.so.3.0 00:03:10.131 SO libspdk_event_scsi.so.6.0 00:03:10.131 SYMLINK libspdk_event_nbd.so 00:03:10.131 SYMLINK libspdk_event_ublk.so 00:03:10.131 SYMLINK libspdk_event_scsi.so 00:03:10.131 LIB libspdk_event_nvmf.a 00:03:10.131 SO libspdk_event_nvmf.so.6.0 00:03:10.131 SYMLINK libspdk_event_nvmf.so 00:03:10.389 CC module/event/subsystems/iscsi/iscsi.o 00:03:10.389 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:10.389 LIB libspdk_event_vhost_scsi.a 00:03:10.389 LIB libspdk_event_iscsi.a 00:03:10.389 SO libspdk_event_vhost_scsi.so.3.0 00:03:10.389 SO libspdk_event_iscsi.so.6.0 00:03:10.648 SYMLINK libspdk_event_vhost_scsi.so 00:03:10.648 SYMLINK libspdk_event_iscsi.so 00:03:10.648 SO libspdk.so.6.0 00:03:10.648 SYMLINK libspdk.so 00:03:10.913 CXX app/trace/trace.o 00:03:10.913 CC app/spdk_lspci/spdk_lspci.o 00:03:10.913 CC app/spdk_nvme_identify/identify.o 00:03:10.913 CC app/spdk_nvme_discover/discovery_aer.o 00:03:10.913 CC app/spdk_top/spdk_top.o 00:03:10.913 CC app/trace_record/trace_record.o 00:03:10.913 CC test/rpc_client/rpc_client_test.o 00:03:10.913 CC app/spdk_nvme_perf/perf.o 00:03:10.913 TEST_HEADER include/spdk/accel.h 00:03:10.913 TEST_HEADER include/spdk/assert.h 00:03:10.913 TEST_HEADER include/spdk/accel_module.h 00:03:10.913 TEST_HEADER include/spdk/barrier.h 00:03:10.913 TEST_HEADER include/spdk/base64.h 00:03:10.913 TEST_HEADER include/spdk/bdev.h 00:03:10.913 TEST_HEADER include/spdk/bdev_module.h 00:03:10.913 TEST_HEADER include/spdk/bdev_zone.h 00:03:10.913 TEST_HEADER include/spdk/bit_array.h 00:03:10.913 TEST_HEADER include/spdk/bit_pool.h 00:03:10.913 TEST_HEADER include/spdk/blob_bdev.h 00:03:10.913 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:10.913 TEST_HEADER include/spdk/blobfs.h 00:03:10.913 TEST_HEADER include/spdk/blob.h 00:03:10.913 TEST_HEADER include/spdk/conf.h 00:03:10.913 TEST_HEADER include/spdk/cpuset.h 00:03:10.913 TEST_HEADER include/spdk/config.h 00:03:10.913 TEST_HEADER include/spdk/crc16.h 00:03:10.913 TEST_HEADER include/spdk/crc32.h 00:03:10.913 TEST_HEADER include/spdk/crc64.h 00:03:10.913 TEST_HEADER include/spdk/dma.h 00:03:10.913 TEST_HEADER include/spdk/dif.h 00:03:10.913 TEST_HEADER include/spdk/endian.h 00:03:10.913 TEST_HEADER include/spdk/env_dpdk.h 00:03:10.913 TEST_HEADER include/spdk/env.h 00:03:10.913 TEST_HEADER include/spdk/event.h 00:03:10.913 TEST_HEADER include/spdk/fd_group.h 00:03:10.913 TEST_HEADER include/spdk/fd.h 00:03:10.913 TEST_HEADER include/spdk/file.h 00:03:10.913 TEST_HEADER include/spdk/fsdev.h 00:03:10.913 TEST_HEADER include/spdk/fsdev_module.h 00:03:10.913 TEST_HEADER include/spdk/ftl.h 00:03:10.913 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:10.913 TEST_HEADER include/spdk/gpt_spec.h 00:03:10.913 TEST_HEADER include/spdk/hexlify.h 00:03:10.913 TEST_HEADER include/spdk/histogram_data.h 00:03:10.913 TEST_HEADER include/spdk/idxd.h 00:03:10.913 TEST_HEADER include/spdk/idxd_spec.h 00:03:10.913 TEST_HEADER include/spdk/init.h 00:03:10.913 TEST_HEADER include/spdk/ioat.h 00:03:10.913 TEST_HEADER include/spdk/ioat_spec.h 00:03:10.913 TEST_HEADER include/spdk/json.h 00:03:10.913 TEST_HEADER include/spdk/iscsi_spec.h 00:03:10.913 TEST_HEADER include/spdk/jsonrpc.h 00:03:10.913 TEST_HEADER include/spdk/keyring.h 00:03:10.913 TEST_HEADER include/spdk/keyring_module.h 00:03:10.913 TEST_HEADER include/spdk/log.h 00:03:10.913 TEST_HEADER include/spdk/likely.h 00:03:10.913 TEST_HEADER include/spdk/lvol.h 00:03:10.913 TEST_HEADER include/spdk/md5.h 
00:03:10.913 TEST_HEADER include/spdk/memory.h 00:03:10.913 TEST_HEADER include/spdk/mmio.h 00:03:10.913 TEST_HEADER include/spdk/nbd.h 00:03:10.913 TEST_HEADER include/spdk/net.h 00:03:10.913 TEST_HEADER include/spdk/notify.h 00:03:10.913 TEST_HEADER include/spdk/nvme_intel.h 00:03:10.913 TEST_HEADER include/spdk/nvme.h 00:03:10.913 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:10.913 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:10.913 TEST_HEADER include/spdk/nvme_spec.h 00:03:10.913 TEST_HEADER include/spdk/nvme_zns.h 00:03:10.913 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:10.913 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:10.913 TEST_HEADER include/spdk/nvmf.h 00:03:10.913 TEST_HEADER include/spdk/nvmf_spec.h 00:03:10.913 TEST_HEADER include/spdk/nvmf_transport.h 00:03:10.913 TEST_HEADER include/spdk/opal.h 00:03:10.913 TEST_HEADER include/spdk/opal_spec.h 00:03:10.913 TEST_HEADER include/spdk/pci_ids.h 00:03:10.913 TEST_HEADER include/spdk/pipe.h 00:03:10.913 TEST_HEADER include/spdk/queue.h 00:03:10.913 TEST_HEADER include/spdk/reduce.h 00:03:10.913 TEST_HEADER include/spdk/rpc.h 00:03:10.913 TEST_HEADER include/spdk/scheduler.h 00:03:10.913 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:10.913 TEST_HEADER include/spdk/scsi.h 00:03:10.913 TEST_HEADER include/spdk/sock.h 00:03:10.913 TEST_HEADER include/spdk/scsi_spec.h 00:03:10.913 TEST_HEADER include/spdk/stdinc.h 00:03:10.913 TEST_HEADER include/spdk/string.h 00:03:10.913 TEST_HEADER include/spdk/thread.h 00:03:10.913 TEST_HEADER include/spdk/trace.h 00:03:10.913 TEST_HEADER include/spdk/tree.h 00:03:10.913 TEST_HEADER include/spdk/trace_parser.h 00:03:10.913 TEST_HEADER include/spdk/ublk.h 00:03:10.913 TEST_HEADER include/spdk/util.h 00:03:10.913 TEST_HEADER include/spdk/uuid.h 00:03:10.913 TEST_HEADER include/spdk/version.h 00:03:10.913 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:10.913 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:10.913 TEST_HEADER include/spdk/vhost.h 00:03:10.913 CC app/spdk_dd/spdk_dd.o 00:03:10.913 TEST_HEADER include/spdk/vmd.h 00:03:10.913 TEST_HEADER include/spdk/xor.h 00:03:10.913 TEST_HEADER include/spdk/zipf.h 00:03:10.913 CXX test/cpp_headers/accel.o 00:03:10.913 CXX test/cpp_headers/accel_module.o 00:03:10.913 CXX test/cpp_headers/assert.o 00:03:10.913 CXX test/cpp_headers/barrier.o 00:03:10.913 CXX test/cpp_headers/base64.o 00:03:10.913 CXX test/cpp_headers/bdev.o 00:03:10.913 CXX test/cpp_headers/bdev_module.o 00:03:10.913 CXX test/cpp_headers/bdev_zone.o 00:03:10.913 CXX test/cpp_headers/bit_array.o 00:03:10.913 CXX test/cpp_headers/bit_pool.o 00:03:10.913 CXX test/cpp_headers/blob_bdev.o 00:03:10.913 CXX test/cpp_headers/blobfs_bdev.o 00:03:10.913 CXX test/cpp_headers/blobfs.o 00:03:10.913 CXX test/cpp_headers/blob.o 00:03:10.913 CXX test/cpp_headers/conf.o 00:03:10.913 CC app/nvmf_tgt/nvmf_main.o 00:03:10.913 CC app/iscsi_tgt/iscsi_tgt.o 00:03:10.913 CXX test/cpp_headers/config.o 00:03:10.913 CXX test/cpp_headers/cpuset.o 00:03:10.913 CXX test/cpp_headers/crc16.o 00:03:10.913 CC app/spdk_tgt/spdk_tgt.o 00:03:10.913 CC examples/ioat/perf/perf.o 00:03:10.913 CC examples/ioat/verify/verify.o 00:03:10.913 CXX test/cpp_headers/crc32.o 00:03:10.913 CC examples/util/zipf/zipf.o 00:03:10.913 CC test/thread/poller_perf/poller_perf.o 00:03:10.913 CC test/env/vtophys/vtophys.o 00:03:10.913 CC app/fio/nvme/fio_plugin.o 00:03:10.913 CC test/env/pci/pci_ut.o 00:03:10.913 CC test/app/stub/stub.o 00:03:10.913 CC test/env/memory/memory_ut.o 00:03:10.913 CC test/app/histogram_perf/histogram_perf.o 
00:03:10.913 CC test/app/jsoncat/jsoncat.o 00:03:10.913 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:10.913 CC app/fio/bdev/fio_plugin.o 00:03:11.174 CC test/app/bdev_svc/bdev_svc.o 00:03:11.174 CC test/dma/test_dma/test_dma.o 00:03:11.174 LINK spdk_lspci 00:03:11.174 CC test/env/mem_callbacks/mem_callbacks.o 00:03:11.174 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:11.174 LINK rpc_client_test 00:03:11.435 LINK spdk_nvme_discover 00:03:11.435 LINK jsoncat 00:03:11.435 LINK poller_perf 00:03:11.435 LINK interrupt_tgt 00:03:11.435 LINK zipf 00:03:11.435 CXX test/cpp_headers/crc64.o 00:03:11.435 LINK vtophys 00:03:11.435 CXX test/cpp_headers/dif.o 00:03:11.435 LINK nvmf_tgt 00:03:11.435 LINK env_dpdk_post_init 00:03:11.435 CXX test/cpp_headers/dma.o 00:03:11.435 LINK histogram_perf 00:03:11.435 CXX test/cpp_headers/endian.o 00:03:11.435 CXX test/cpp_headers/env_dpdk.o 00:03:11.435 CXX test/cpp_headers/env.o 00:03:11.435 CXX test/cpp_headers/event.o 00:03:11.435 CXX test/cpp_headers/fd_group.o 00:03:11.435 CXX test/cpp_headers/fd.o 00:03:11.435 CXX test/cpp_headers/file.o 00:03:11.435 CXX test/cpp_headers/fsdev.o 00:03:11.435 LINK stub 00:03:11.435 LINK iscsi_tgt 00:03:11.435 CXX test/cpp_headers/fsdev_module.o 00:03:11.435 CXX test/cpp_headers/ftl.o 00:03:11.435 LINK spdk_trace_record 00:03:11.435 CXX test/cpp_headers/fuse_dispatcher.o 00:03:11.435 CXX test/cpp_headers/gpt_spec.o 00:03:11.435 LINK spdk_tgt 00:03:11.435 LINK bdev_svc 00:03:11.435 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:11.435 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:11.435 LINK verify 00:03:11.435 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:11.435 LINK ioat_perf 00:03:11.435 CXX test/cpp_headers/hexlify.o 00:03:11.697 CXX test/cpp_headers/histogram_data.o 00:03:11.697 CXX test/cpp_headers/idxd_spec.o 00:03:11.697 CXX test/cpp_headers/idxd.o 00:03:11.697 CXX test/cpp_headers/init.o 00:03:11.697 CXX test/cpp_headers/ioat.o 00:03:11.697 CXX test/cpp_headers/ioat_spec.o 00:03:11.697 CXX test/cpp_headers/iscsi_spec.o 00:03:11.697 CXX test/cpp_headers/json.o 00:03:11.697 CXX test/cpp_headers/jsonrpc.o 00:03:11.697 CXX test/cpp_headers/keyring.o 00:03:11.697 CXX test/cpp_headers/keyring_module.o 00:03:11.697 CXX test/cpp_headers/likely.o 00:03:11.697 CXX test/cpp_headers/log.o 00:03:11.697 LINK spdk_dd 00:03:11.697 CXX test/cpp_headers/lvol.o 00:03:11.697 CXX test/cpp_headers/md5.o 00:03:11.697 CXX test/cpp_headers/memory.o 00:03:11.697 CXX test/cpp_headers/mmio.o 00:03:11.697 CXX test/cpp_headers/nbd.o 00:03:11.959 CXX test/cpp_headers/net.o 00:03:11.959 CXX test/cpp_headers/notify.o 00:03:11.959 CXX test/cpp_headers/nvme.o 00:03:11.959 CXX test/cpp_headers/nvme_intel.o 00:03:11.959 LINK spdk_trace 00:03:11.959 CXX test/cpp_headers/nvme_ocssd.o 00:03:11.959 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:11.959 CXX test/cpp_headers/nvme_spec.o 00:03:11.959 CXX test/cpp_headers/nvme_zns.o 00:03:11.959 CC test/event/event_perf/event_perf.o 00:03:11.959 CC test/event/reactor/reactor.o 00:03:11.959 CXX test/cpp_headers/nvmf_cmd.o 00:03:11.959 CC examples/sock/hello_world/hello_sock.o 00:03:11.959 CC test/event/reactor_perf/reactor_perf.o 00:03:11.959 CC examples/vmd/lsvmd/lsvmd.o 00:03:11.959 LINK pci_ut 00:03:11.959 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:11.959 CC examples/idxd/perf/perf.o 00:03:11.959 CC examples/vmd/led/led.o 00:03:11.959 CC test/event/app_repeat/app_repeat.o 00:03:11.959 CC examples/thread/thread/thread_ex.o 00:03:12.216 CC test/event/scheduler/scheduler.o 00:03:12.216 CXX 
test/cpp_headers/nvmf.o 00:03:12.216 CXX test/cpp_headers/nvmf_spec.o 00:03:12.216 CXX test/cpp_headers/nvmf_transport.o 00:03:12.216 CXX test/cpp_headers/opal.o 00:03:12.216 CXX test/cpp_headers/opal_spec.o 00:03:12.216 CXX test/cpp_headers/pci_ids.o 00:03:12.216 CXX test/cpp_headers/pipe.o 00:03:12.216 LINK test_dma 00:03:12.216 CXX test/cpp_headers/queue.o 00:03:12.216 CXX test/cpp_headers/reduce.o 00:03:12.216 CXX test/cpp_headers/rpc.o 00:03:12.216 CXX test/cpp_headers/scheduler.o 00:03:12.216 CXX test/cpp_headers/scsi.o 00:03:12.216 CXX test/cpp_headers/scsi_spec.o 00:03:12.216 CXX test/cpp_headers/sock.o 00:03:12.216 CXX test/cpp_headers/stdinc.o 00:03:12.216 LINK spdk_bdev 00:03:12.216 CXX test/cpp_headers/string.o 00:03:12.216 CXX test/cpp_headers/thread.o 00:03:12.216 LINK nvme_fuzz 00:03:12.216 CXX test/cpp_headers/trace.o 00:03:12.216 LINK reactor_perf 00:03:12.216 LINK event_perf 00:03:12.216 LINK reactor 00:03:12.216 LINK lsvmd 00:03:12.216 CXX test/cpp_headers/trace_parser.o 00:03:12.504 CXX test/cpp_headers/tree.o 00:03:12.504 CXX test/cpp_headers/ublk.o 00:03:12.504 LINK spdk_nvme 00:03:12.504 LINK led 00:03:12.504 CXX test/cpp_headers/util.o 00:03:12.504 CXX test/cpp_headers/uuid.o 00:03:12.504 CXX test/cpp_headers/version.o 00:03:12.504 LINK app_repeat 00:03:12.504 CXX test/cpp_headers/vfio_user_pci.o 00:03:12.504 LINK mem_callbacks 00:03:12.504 LINK vhost_fuzz 00:03:12.504 CC app/vhost/vhost.o 00:03:12.504 CXX test/cpp_headers/vfio_user_spec.o 00:03:12.504 CXX test/cpp_headers/vhost.o 00:03:12.504 CXX test/cpp_headers/vmd.o 00:03:12.504 CXX test/cpp_headers/xor.o 00:03:12.504 CXX test/cpp_headers/zipf.o 00:03:12.504 LINK hello_sock 00:03:12.504 LINK scheduler 00:03:12.787 LINK thread 00:03:12.787 LINK vhost 00:03:12.787 LINK spdk_nvme_perf 00:03:12.787 CC test/nvme/aer/aer.o 00:03:12.787 CC test/nvme/e2edp/nvme_dp.o 00:03:12.787 CC test/nvme/err_injection/err_injection.o 00:03:12.787 CC test/nvme/fdp/fdp.o 00:03:12.787 CC test/nvme/sgl/sgl.o 00:03:12.788 CC test/nvme/reset/reset.o 00:03:12.788 CC test/nvme/reserve/reserve.o 00:03:12.788 CC test/nvme/connect_stress/connect_stress.o 00:03:12.788 LINK idxd_perf 00:03:12.788 CC test/nvme/compliance/nvme_compliance.o 00:03:12.788 CC test/nvme/overhead/overhead.o 00:03:12.788 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:12.788 CC test/nvme/startup/startup.o 00:03:12.788 CC test/nvme/simple_copy/simple_copy.o 00:03:12.788 CC test/nvme/fused_ordering/fused_ordering.o 00:03:12.788 CC test/nvme/boot_partition/boot_partition.o 00:03:12.788 CC test/nvme/cuse/cuse.o 00:03:12.788 CC test/blobfs/mkfs/mkfs.o 00:03:13.073 CC test/accel/dif/dif.o 00:03:13.073 CC test/lvol/esnap/esnap.o 00:03:13.073 LINK spdk_nvme_identify 00:03:13.073 CC examples/nvme/abort/abort.o 00:03:13.073 CC examples/nvme/hello_world/hello_world.o 00:03:13.073 LINK spdk_top 00:03:13.073 CC examples/nvme/reconnect/reconnect.o 00:03:13.073 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:13.073 CC examples/nvme/hotplug/hotplug.o 00:03:13.073 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:13.073 CC examples/nvme/arbitration/arbitration.o 00:03:13.073 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:13.073 LINK startup 00:03:13.073 LINK boot_partition 00:03:13.073 LINK doorbell_aers 00:03:13.073 LINK connect_stress 00:03:13.073 LINK mkfs 00:03:13.348 LINK err_injection 00:03:13.348 LINK reserve 00:03:13.348 LINK simple_copy 00:03:13.348 CC examples/accel/perf/accel_perf.o 00:03:13.348 LINK aer 00:03:13.348 CC examples/blob/cli/blobcli.o 00:03:13.348 CC 
examples/fsdev/hello_world/hello_fsdev.o 00:03:13.348 LINK fused_ordering 00:03:13.348 CC examples/blob/hello_world/hello_blob.o 00:03:13.348 LINK nvme_dp 00:03:13.348 LINK pmr_persistence 00:03:13.348 LINK cmb_copy 00:03:13.348 LINK sgl 00:03:13.348 LINK hotplug 00:03:13.348 LINK fdp 00:03:13.348 LINK reset 00:03:13.607 LINK hello_world 00:03:13.607 LINK overhead 00:03:13.607 LINK arbitration 00:03:13.607 LINK nvme_compliance 00:03:13.607 LINK hello_blob 00:03:13.607 LINK abort 00:03:13.607 LINK memory_ut 00:03:13.607 LINK hello_fsdev 00:03:13.865 LINK reconnect 00:03:13.865 LINK dif 00:03:13.865 LINK blobcli 00:03:13.865 LINK nvme_manage 00:03:14.122 LINK accel_perf 00:03:14.380 CC test/bdev/bdevio/bdevio.o 00:03:14.380 CC examples/bdev/hello_world/hello_bdev.o 00:03:14.380 CC examples/bdev/bdevperf/bdevperf.o 00:03:14.638 LINK hello_bdev 00:03:14.896 LINK iscsi_fuzz 00:03:14.896 LINK bdevio 00:03:14.896 LINK cuse 00:03:15.463 LINK bdevperf 00:03:15.721 CC examples/nvmf/nvmf/nvmf.o 00:03:16.288 LINK nvmf 00:03:20.480 LINK esnap 00:03:20.480 00:03:20.480 real 1m21.654s 00:03:20.480 user 13m9.328s 00:03:20.480 sys 2m34.973s 00:03:20.480 23:36:46 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:20.480 23:36:46 make -- common/autotest_common.sh@10 -- $ set +x 00:03:20.480 ************************************ 00:03:20.480 END TEST make 00:03:20.480 ************************************ 00:03:20.480 23:36:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:20.480 23:36:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:20.480 23:36:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:20.480 23:36:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.480 23:36:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:20.480 23:36:46 -- pm/common@44 -- $ pid=3242452 00:03:20.480 23:36:46 -- pm/common@50 -- $ kill -TERM 3242452 00:03:20.480 23:36:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.480 23:36:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:20.480 23:36:46 -- pm/common@44 -- $ pid=3242454 00:03:20.480 23:36:46 -- pm/common@50 -- $ kill -TERM 3242454 00:03:20.480 23:36:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.480 23:36:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:20.480 23:36:46 -- pm/common@44 -- $ pid=3242455 00:03:20.480 23:36:46 -- pm/common@50 -- $ kill -TERM 3242455 00:03:20.480 23:36:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.480 23:36:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:20.481 23:36:46 -- pm/common@44 -- $ pid=3242484 00:03:20.481 23:36:46 -- pm/common@50 -- $ sudo -E kill -TERM 3242484 00:03:20.481 23:36:46 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:20.481 23:36:46 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:20.481 23:36:46 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:20.481 23:36:46 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:20.481 23:36:46 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:20.481 23:36:46 
-- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:20.481 23:36:46 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:20.481 23:36:46 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:20.481 23:36:46 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:20.481 23:36:46 -- scripts/common.sh@336 -- # IFS=.-: 00:03:20.481 23:36:46 -- scripts/common.sh@336 -- # read -ra ver1 00:03:20.481 23:36:46 -- scripts/common.sh@337 -- # IFS=.-: 00:03:20.481 23:36:46 -- scripts/common.sh@337 -- # read -ra ver2 00:03:20.481 23:36:46 -- scripts/common.sh@338 -- # local 'op=<' 00:03:20.481 23:36:46 -- scripts/common.sh@340 -- # ver1_l=2 00:03:20.481 23:36:46 -- scripts/common.sh@341 -- # ver2_l=1 00:03:20.481 23:36:46 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:20.481 23:36:46 -- scripts/common.sh@344 -- # case "$op" in 00:03:20.481 23:36:46 -- scripts/common.sh@345 -- # : 1 00:03:20.481 23:36:46 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:20.481 23:36:46 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:20.481 23:36:46 -- scripts/common.sh@365 -- # decimal 1 00:03:20.481 23:36:46 -- scripts/common.sh@353 -- # local d=1 00:03:20.481 23:36:46 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:20.481 23:36:46 -- scripts/common.sh@355 -- # echo 1 00:03:20.481 23:36:46 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:20.481 23:36:46 -- scripts/common.sh@366 -- # decimal 2 00:03:20.481 23:36:46 -- scripts/common.sh@353 -- # local d=2 00:03:20.481 23:36:46 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:20.481 23:36:46 -- scripts/common.sh@355 -- # echo 2 00:03:20.481 23:36:46 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:20.481 23:36:46 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:20.481 23:36:46 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:20.481 23:36:46 -- scripts/common.sh@368 -- # return 0 00:03:20.481 23:36:46 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:20.481 23:36:46 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:20.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.481 --rc genhtml_branch_coverage=1 00:03:20.481 --rc genhtml_function_coverage=1 00:03:20.481 --rc genhtml_legend=1 00:03:20.481 --rc geninfo_all_blocks=1 00:03:20.481 --rc geninfo_unexecuted_blocks=1 00:03:20.481 00:03:20.481 ' 00:03:20.481 23:36:46 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:20.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.481 --rc genhtml_branch_coverage=1 00:03:20.481 --rc genhtml_function_coverage=1 00:03:20.481 --rc genhtml_legend=1 00:03:20.481 --rc geninfo_all_blocks=1 00:03:20.481 --rc geninfo_unexecuted_blocks=1 00:03:20.481 00:03:20.481 ' 00:03:20.481 23:36:46 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:20.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.481 --rc genhtml_branch_coverage=1 00:03:20.481 --rc genhtml_function_coverage=1 00:03:20.481 --rc genhtml_legend=1 00:03:20.481 --rc geninfo_all_blocks=1 00:03:20.481 --rc geninfo_unexecuted_blocks=1 00:03:20.481 00:03:20.481 ' 00:03:20.481 23:36:46 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:20.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.481 --rc genhtml_branch_coverage=1 00:03:20.481 --rc genhtml_function_coverage=1 00:03:20.481 --rc genhtml_legend=1 00:03:20.481 --rc geninfo_all_blocks=1 00:03:20.481 --rc geninfo_unexecuted_blocks=1 
00:03:20.481 00:03:20.481 ' 00:03:20.481 23:36:46 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:20.481 23:36:46 -- nvmf/common.sh@7 -- # uname -s 00:03:20.481 23:36:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:20.481 23:36:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:20.481 23:36:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:20.481 23:36:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:20.481 23:36:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:20.481 23:36:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:20.481 23:36:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:20.481 23:36:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:20.481 23:36:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:20.481 23:36:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:20.481 23:36:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:20.481 23:36:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:20.481 23:36:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:20.481 23:36:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:20.481 23:36:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:20.481 23:36:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:20.481 23:36:46 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:20.481 23:36:46 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:20.481 23:36:46 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:20.481 23:36:46 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:20.481 23:36:46 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:20.481 23:36:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.481 23:36:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.481 23:36:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.481 23:36:46 -- paths/export.sh@5 -- # export PATH 00:03:20.481 23:36:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.481 23:36:46 -- nvmf/common.sh@51 -- # : 0 00:03:20.481 23:36:46 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:20.481 23:36:46 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:20.481 23:36:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:20.481 23:36:46 -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:20.481 23:36:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:20.481 23:36:46 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:20.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:20.481 23:36:46 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:20.481 23:36:46 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:20.481 23:36:46 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:20.481 23:36:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:20.481 23:36:46 -- spdk/autotest.sh@32 -- # uname -s 00:03:20.481 23:36:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:20.481 23:36:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:20.481 23:36:46 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:20.481 23:36:46 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:20.481 23:36:46 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:20.481 23:36:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:20.481 23:36:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:20.481 23:36:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:20.481 23:36:46 -- spdk/autotest.sh@48 -- # udevadm_pid=3303392 00:03:20.481 23:36:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:20.481 23:36:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:20.481 23:36:46 -- pm/common@17 -- # local monitor 00:03:20.481 23:36:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.481 23:36:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.481 23:36:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.481 23:36:46 -- pm/common@21 -- # date +%s 00:03:20.481 23:36:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:20.481 23:36:46 -- pm/common@21 -- # date +%s 00:03:20.481 23:36:46 -- pm/common@25 -- # sleep 1 00:03:20.481 23:36:46 -- pm/common@21 -- # date +%s 00:03:20.481 23:36:46 -- pm/common@21 -- # date +%s 00:03:20.481 23:36:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731191806 00:03:20.481 23:36:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731191806 00:03:20.481 23:36:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731191806 00:03:20.481 23:36:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731191806 00:03:20.481 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731191806_collect-vmstat.pm.log 00:03:20.481 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731191806_collect-cpu-temp.pm.log 00:03:20.481 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731191806_collect-cpu-load.pm.log 00:03:20.481 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731191806_collect-bmc-pm.bmc.pm.log 00:03:21.415 23:36:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:21.415 23:36:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:21.415 23:36:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:21.415 23:36:47 -- common/autotest_common.sh@10 -- # set +x 00:03:21.415 23:36:47 -- spdk/autotest.sh@59 -- # create_test_list 00:03:21.415 23:36:47 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:21.415 23:36:47 -- common/autotest_common.sh@10 -- # set +x 00:03:21.415 23:36:47 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:21.415 23:36:47 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:21.415 23:36:47 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:21.415 23:36:47 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:21.415 23:36:47 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:21.415 23:36:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:21.415 23:36:47 -- common/autotest_common.sh@1455 -- # uname 00:03:21.415 23:36:47 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:21.415 23:36:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:21.415 23:36:47 -- common/autotest_common.sh@1475 -- # uname 00:03:21.415 23:36:47 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:21.415 23:36:47 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:21.415 23:36:47 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:21.673 lcov: LCOV version 1.15 00:03:21.673 23:36:47 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:39.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:39.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:01.709 23:37:24 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:01.709 23:37:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:01.709 23:37:24 -- common/autotest_common.sh@10 -- # set +x 00:04:01.709 23:37:24 -- spdk/autotest.sh@78 -- # rm -f 00:04:01.709 23:37:24 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.709 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:01.709 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:01.709 
0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:01.709 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:01.709 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:01.709 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:01.709 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:01.709 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:01.709 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:01.709 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:01.709 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:01.709 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:01.709 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:01.709 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:01.709 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:01.709 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:01.709 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:01.709 23:37:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:01.709 23:37:26 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:01.709 23:37:26 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:01.709 23:37:26 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:01.709 23:37:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:01.709 23:37:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:01.709 23:37:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:01.709 23:37:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:01.709 23:37:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:01.709 23:37:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:01.709 23:37:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.709 23:37:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.709 23:37:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:01.709 23:37:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:01.709 23:37:26 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:01.709 No valid GPT data, bailing 00:04:01.709 23:37:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:01.709 23:37:26 -- scripts/common.sh@394 -- # pt= 00:04:01.709 23:37:26 -- scripts/common.sh@395 -- # return 1 00:04:01.709 23:37:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:01.709 1+0 records in 00:04:01.709 1+0 records out 00:04:01.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00244099 s, 430 MB/s 00:04:01.709 23:37:26 -- spdk/autotest.sh@105 -- # sync 00:04:01.709 23:37:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:01.709 23:37:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:01.709 23:37:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:01.970 23:37:28 -- spdk/autotest.sh@111 -- # uname -s 00:04:01.970 23:37:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:01.970 23:37:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:01.970 23:37:28 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:02.907 Hugepages 00:04:02.907 node hugesize free / total 00:04:02.907 node0 1048576kB 0 / 0 00:04:03.167 node0 2048kB 0 / 0 
00:04:03.167 node1 1048576kB 0 / 0 00:04:03.167 node1 2048kB 0 / 0 00:04:03.167 00:04:03.167 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.167 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:03.167 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:03.167 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:03.167 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:03.167 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:03.167 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:03.167 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:03.167 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:03.167 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:03.167 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:03.167 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:03.167 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:03.167 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:03.167 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:03.167 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:03.167 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:03.167 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:03.167 23:37:29 -- spdk/autotest.sh@117 -- # uname -s 00:04:03.167 23:37:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:03.167 23:37:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:03.167 23:37:29 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.546 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:04.546 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:04.546 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:04.546 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:04.546 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:04.546 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:04.546 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:04.546 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:04.546 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:04.546 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:04.546 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:04.546 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:04.546 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:04.546 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:04.546 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:04.546 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:05.487 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:05.487 23:37:31 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:06.423 23:37:32 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:06.423 23:37:32 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:06.423 23:37:32 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:06.423 23:37:32 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:06.423 23:37:32 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:06.423 23:37:32 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:06.423 23:37:32 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.423 23:37:32 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:06.423 23:37:32 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:06.423 23:37:32 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:06.423 23:37:32 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:04:06.423 23:37:32 -- common/autotest_common.sh@1520 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.362 Waiting for block devices as requested 00:04:07.621 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:07.621 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:07.879 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:07.879 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:07.879 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:07.879 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:08.138 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:08.138 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:08.138 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:08.138 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:08.398 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:08.398 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:08.398 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:08.398 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:08.657 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:08.657 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:08.657 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:08.916 23:37:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:08.916 23:37:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:08.916 23:37:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:08.916 23:37:34 -- common/autotest_common.sh@1485 -- # grep 0000:88:00.0/nvme/nvme 00:04:08.916 23:37:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:08.916 23:37:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:08.916 23:37:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:08.916 23:37:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:08.916 23:37:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:08.916 23:37:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:08.916 23:37:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:08.916 23:37:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:08.916 23:37:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:08.916 23:37:34 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:04:08.916 23:37:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:08.916 23:37:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:08.916 23:37:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:08.916 23:37:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:08.916 23:37:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:08.916 23:37:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:08.916 23:37:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:08.916 23:37:34 -- common/autotest_common.sh@1541 -- # continue 00:04:08.916 23:37:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:08.916 23:37:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.916 23:37:34 -- common/autotest_common.sh@10 -- # set +x 00:04:08.916 23:37:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:08.916 23:37:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.916 23:37:34 -- common/autotest_common.sh@10 -- # set +x 00:04:08.916 23:37:34 -- spdk/autotest.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.298 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:10.298 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:10.298 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:10.298 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:10.298 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:10.298 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:10.298 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:10.298 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:10.298 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:10.298 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:10.298 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:10.298 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:10.298 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:10.298 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:10.298 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:10.298 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:11.257 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.257 23:37:37 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:11.257 23:37:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:11.257 23:37:37 -- common/autotest_common.sh@10 -- # set +x 00:04:11.257 23:37:37 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:11.257 23:37:37 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:11.257 23:37:37 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:11.257 23:37:37 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:11.257 23:37:37 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:11.257 23:37:37 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:11.257 23:37:37 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:11.257 23:37:37 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:11.257 23:37:37 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:11.257 23:37:37 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:11.257 23:37:37 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:11.257 23:37:37 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:11.257 23:37:37 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:11.257 23:37:37 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:11.257 23:37:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:04:11.257 23:37:37 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:11.257 23:37:37 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:11.257 23:37:37 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:11.257 23:37:37 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:11.257 23:37:37 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:11.257 23:37:37 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:11.257 23:37:37 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:04:11.257 23:37:37 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:04:11.257 23:37:37 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3313809 00:04:11.257 23:37:37 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.257 23:37:37 -- common/autotest_common.sh@1583 -- # waitforlisten 3313809 00:04:11.257 23:37:37 -- 
common/autotest_common.sh@833 -- # '[' -z 3313809 ']' 00:04:11.257 23:37:37 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.257 23:37:37 -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:11.257 23:37:37 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.257 23:37:37 -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:11.257 23:37:37 -- common/autotest_common.sh@10 -- # set +x 00:04:11.560 [2024-11-09 23:37:37.471900] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:04:11.560 [2024-11-09 23:37:37.472056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3313809 ] 00:04:11.560 [2024-11-09 23:37:37.602734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.560 [2024-11-09 23:37:37.739672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.936 23:37:38 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:12.936 23:37:38 -- common/autotest_common.sh@866 -- # return 0 00:04:12.936 23:37:38 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:12.936 23:37:38 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:12.936 23:37:38 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:16.226 nvme0n1 00:04:16.226 23:37:41 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:16.226 [2024-11-09 23:37:42.103135] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:16.226 [2024-11-09 23:37:42.103204] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:16.226 request: 00:04:16.226 { 00:04:16.226 "nvme_ctrlr_name": "nvme0", 00:04:16.226 "password": "test", 00:04:16.226 "method": "bdev_nvme_opal_revert", 00:04:16.226 "req_id": 1 00:04:16.226 } 00:04:16.226 Got JSON-RPC error response 00:04:16.226 response: 00:04:16.226 { 00:04:16.226 "code": -32603, 00:04:16.226 "message": "Internal error" 00:04:16.226 } 00:04:16.226 23:37:42 -- common/autotest_common.sh@1589 -- # true 00:04:16.226 23:37:42 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:16.226 23:37:42 -- common/autotest_common.sh@1593 -- # killprocess 3313809 00:04:16.226 23:37:42 -- common/autotest_common.sh@952 -- # '[' -z 3313809 ']' 00:04:16.226 23:37:42 -- common/autotest_common.sh@956 -- # kill -0 3313809 00:04:16.226 23:37:42 -- common/autotest_common.sh@957 -- # uname 00:04:16.226 23:37:42 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:16.227 23:37:42 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3313809 00:04:16.227 23:37:42 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:16.227 23:37:42 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:16.227 23:37:42 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3313809' 00:04:16.227 killing process with pid 3313809 00:04:16.227 23:37:42 -- common/autotest_common.sh@971 -- # kill 
3313809 00:04:16.227 23:37:42 -- common/autotest_common.sh@976 -- # wait 3313809 00:04:20.426 23:37:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:20.426 23:37:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:20.426 23:37:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.426 23:37:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.426 23:37:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:20.426 23:37:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.426 23:37:45 -- common/autotest_common.sh@10 -- # set +x 00:04:20.426 23:37:45 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:20.426 23:37:45 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.426 23:37:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.426 23:37:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.426 23:37:45 -- common/autotest_common.sh@10 -- # set +x 00:04:20.426 ************************************ 00:04:20.426 START TEST env 00:04:20.426 ************************************ 00:04:20.426 23:37:45 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.426 * Looking for test storage... 00:04:20.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:20.426 23:37:45 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:20.426 23:37:45 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:20.427 23:37:45 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:20.427 23:37:46 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:20.427 23:37:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.427 23:37:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.427 23:37:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.427 23:37:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.427 23:37:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.427 23:37:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.427 23:37:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.427 23:37:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.427 23:37:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.427 23:37:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.427 23:37:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.427 23:37:46 env -- scripts/common.sh@344 -- # case "$op" in 00:04:20.427 23:37:46 env -- scripts/common.sh@345 -- # : 1 00:04:20.427 23:37:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.427 23:37:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.427 23:37:46 env -- scripts/common.sh@365 -- # decimal 1 00:04:20.427 23:37:46 env -- scripts/common.sh@353 -- # local d=1 00:04:20.427 23:37:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.427 23:37:46 env -- scripts/common.sh@355 -- # echo 1 00:04:20.427 23:37:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.427 23:37:46 env -- scripts/common.sh@366 -- # decimal 2 00:04:20.427 23:37:46 env -- scripts/common.sh@353 -- # local d=2 00:04:20.427 23:37:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.427 23:37:46 env -- scripts/common.sh@355 -- # echo 2 00:04:20.427 23:37:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.427 23:37:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.427 23:37:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.427 23:37:46 env -- scripts/common.sh@368 -- # return 0 00:04:20.427 23:37:46 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.427 23:37:46 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:20.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.427 --rc genhtml_branch_coverage=1 00:04:20.427 --rc genhtml_function_coverage=1 00:04:20.427 --rc genhtml_legend=1 00:04:20.427 --rc geninfo_all_blocks=1 00:04:20.427 --rc geninfo_unexecuted_blocks=1 00:04:20.427 00:04:20.427 ' 00:04:20.427 23:37:46 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:20.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.427 --rc genhtml_branch_coverage=1 00:04:20.427 --rc genhtml_function_coverage=1 00:04:20.427 --rc genhtml_legend=1 00:04:20.427 --rc geninfo_all_blocks=1 00:04:20.427 --rc geninfo_unexecuted_blocks=1 00:04:20.427 00:04:20.427 ' 00:04:20.427 23:37:46 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:20.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.427 --rc genhtml_branch_coverage=1 00:04:20.427 --rc genhtml_function_coverage=1 00:04:20.427 --rc genhtml_legend=1 00:04:20.427 --rc geninfo_all_blocks=1 00:04:20.427 --rc geninfo_unexecuted_blocks=1 00:04:20.427 00:04:20.427 ' 00:04:20.427 23:37:46 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:20.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.427 --rc genhtml_branch_coverage=1 00:04:20.427 --rc genhtml_function_coverage=1 00:04:20.427 --rc genhtml_legend=1 00:04:20.427 --rc geninfo_all_blocks=1 00:04:20.427 --rc geninfo_unexecuted_blocks=1 00:04:20.427 00:04:20.427 ' 00:04:20.427 23:37:46 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.427 23:37:46 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.427 23:37:46 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.427 23:37:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.427 ************************************ 00:04:20.427 START TEST env_memory 00:04:20.427 ************************************ 00:04:20.427 23:37:46 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.427 00:04:20.427 00:04:20.427 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.427 http://cunit.sourceforge.net/ 00:04:20.427 00:04:20.427 00:04:20.427 Suite: memory 00:04:20.427 Test: alloc and free memory map ...[2024-11-09 23:37:46.122680] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.427 passed 00:04:20.427 Test: mem map translation ...[2024-11-09 23:37:46.169521] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.427 [2024-11-09 23:37:46.169568] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.427 [2024-11-09 23:37:46.169676] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.427 [2024-11-09 23:37:46.169711] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.427 passed 00:04:20.427 Test: mem map registration ...[2024-11-09 23:37:46.237497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:20.427 [2024-11-09 23:37:46.237539] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:20.427 passed 00:04:20.427 Test: mem map adjacent registrations ...passed 00:04:20.427 00:04:20.427 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.427 suites 1 1 n/a 0 0 00:04:20.427 tests 4 4 4 0 0 00:04:20.427 asserts 152 152 152 0 n/a 00:04:20.427 00:04:20.427 Elapsed time = 0.240 seconds 00:04:20.427 00:04:20.427 real 0m0.262s 00:04:20.427 user 0m0.242s 00:04:20.427 sys 0m0.018s 00:04:20.427 23:37:46 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:20.427 23:37:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.427 ************************************ 00:04:20.427 END TEST env_memory 00:04:20.427 ************************************ 00:04:20.427 23:37:46 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.427 23:37:46 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.427 23:37:46 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.427 23:37:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.427 ************************************ 00:04:20.427 START TEST env_vtophys 00:04:20.427 ************************************ 00:04:20.427 23:37:46 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.427 EAL: lib.eal log level changed from notice to debug 00:04:20.427 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.427 EAL: Detected lcore 1 as core 1 on socket 0 00:04:20.427 EAL: Detected lcore 2 as core 2 on socket 0 00:04:20.427 EAL: Detected lcore 3 as core 3 on socket 0 00:04:20.427 EAL: Detected lcore 4 as core 4 on socket 0 00:04:20.427 EAL: Detected lcore 5 as core 5 on socket 0 00:04:20.427 EAL: Detected lcore 6 as core 8 on socket 0 00:04:20.427 EAL: Detected lcore 7 as core 9 on socket 0 00:04:20.427 EAL: Detected lcore 8 as core 10 on socket 0 00:04:20.427 EAL: Detected lcore 9 as core 11 on socket 0 00:04:20.427 EAL: Detected lcore 10 
as core 12 on socket 0 00:04:20.427 EAL: Detected lcore 11 as core 13 on socket 0 00:04:20.427 EAL: Detected lcore 12 as core 0 on socket 1 00:04:20.427 EAL: Detected lcore 13 as core 1 on socket 1 00:04:20.427 EAL: Detected lcore 14 as core 2 on socket 1 00:04:20.427 EAL: Detected lcore 15 as core 3 on socket 1 00:04:20.427 EAL: Detected lcore 16 as core 4 on socket 1 00:04:20.427 EAL: Detected lcore 17 as core 5 on socket 1 00:04:20.427 EAL: Detected lcore 18 as core 8 on socket 1 00:04:20.427 EAL: Detected lcore 19 as core 9 on socket 1 00:04:20.427 EAL: Detected lcore 20 as core 10 on socket 1 00:04:20.427 EAL: Detected lcore 21 as core 11 on socket 1 00:04:20.427 EAL: Detected lcore 22 as core 12 on socket 1 00:04:20.427 EAL: Detected lcore 23 as core 13 on socket 1 00:04:20.427 EAL: Detected lcore 24 as core 0 on socket 0 00:04:20.427 EAL: Detected lcore 25 as core 1 on socket 0 00:04:20.427 EAL: Detected lcore 26 as core 2 on socket 0 00:04:20.427 EAL: Detected lcore 27 as core 3 on socket 0 00:04:20.427 EAL: Detected lcore 28 as core 4 on socket 0 00:04:20.427 EAL: Detected lcore 29 as core 5 on socket 0 00:04:20.427 EAL: Detected lcore 30 as core 8 on socket 0 00:04:20.427 EAL: Detected lcore 31 as core 9 on socket 0 00:04:20.427 EAL: Detected lcore 32 as core 10 on socket 0 00:04:20.427 EAL: Detected lcore 33 as core 11 on socket 0 00:04:20.427 EAL: Detected lcore 34 as core 12 on socket 0 00:04:20.427 EAL: Detected lcore 35 as core 13 on socket 0 00:04:20.427 EAL: Detected lcore 36 as core 0 on socket 1 00:04:20.427 EAL: Detected lcore 37 as core 1 on socket 1 00:04:20.427 EAL: Detected lcore 38 as core 2 on socket 1 00:04:20.427 EAL: Detected lcore 39 as core 3 on socket 1 00:04:20.427 EAL: Detected lcore 40 as core 4 on socket 1 00:04:20.427 EAL: Detected lcore 41 as core 5 on socket 1 00:04:20.427 EAL: Detected lcore 42 as core 8 on socket 1 00:04:20.427 EAL: Detected lcore 43 as core 9 on socket 1 00:04:20.427 EAL: Detected lcore 44 as core 10 on socket 1 00:04:20.427 EAL: Detected lcore 45 as core 11 on socket 1 00:04:20.427 EAL: Detected lcore 46 as core 12 on socket 1 00:04:20.427 EAL: Detected lcore 47 as core 13 on socket 1 00:04:20.427 EAL: Maximum logical cores by configuration: 128 00:04:20.427 EAL: Detected CPU lcores: 48 00:04:20.428 EAL: Detected NUMA nodes: 2 00:04:20.428 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:20.428 EAL: Detected shared linkage of DPDK 00:04:20.428 EAL: No shared files mode enabled, IPC will be disabled 00:04:20.428 EAL: Bus pci wants IOVA as 'DC' 00:04:20.428 EAL: Buses did not request a specific IOVA mode. 00:04:20.428 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:20.428 EAL: Selected IOVA mode 'VA' 00:04:20.428 EAL: Probing VFIO support... 00:04:20.428 EAL: IOMMU type 1 (Type 1) is supported 00:04:20.428 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:20.428 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:20.428 EAL: VFIO support initialized 00:04:20.428 EAL: Ask a virtual area of 0x2e000 bytes 00:04:20.428 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:20.428 EAL: Setting up physically contiguous memory... 
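The lcore-to-socket mapping that EAL prints above can be sanity-checked outside the test from standard kernel sysfs topology files; the short loop below is only an illustrative sketch (not part of autotest itself) and prints just the first few CPUs.

  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      # core_id and the nodeN entry are standard kernel sysfs attributes
      printf '%s: core %s on node %s\n' "${cpu##*/}" \
          "$(cat "$cpu/topology/core_id")" \
          "$(ls -d "$cpu"/node* | sed 's/.*node//')"
  done | head -n 5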
00:04:20.428 EAL: Setting maximum number of open files to 524288 00:04:20.428 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:20.428 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:20.428 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:20.428 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.428 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:20.428 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.428 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.428 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:20.428 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:20.428 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.428 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:20.428 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.428 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.428 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:20.428 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:20.428 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.428 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:20.428 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.428 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.428 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:20.428 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:20.428 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.428 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:20.428 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.428 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.428 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:20.428 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:20.428 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:20.428 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.428 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:20.428 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.428 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.428 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:20.428 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:20.428 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.428 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:20.428 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.428 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.428 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:20.428 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:20.428 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.428 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:20.428 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.428 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.428 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:20.428 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:20.428 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.428 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:20.428 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.428 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.428 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:20.428 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:20.428 EAL: Hugepages will be freed exactly as allocated. 00:04:20.428 EAL: No shared files mode enabled, IPC is disabled 00:04:20.428 EAL: No shared files mode enabled, IPC is disabled 00:04:20.428 EAL: TSC frequency is ~2700000 KHz 00:04:20.428 EAL: Main lcore 0 is ready (tid=7f7581229a40;cpuset=[0]) 00:04:20.428 EAL: Trying to obtain current memory policy. 00:04:20.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.428 EAL: Restoring previous memory policy: 0 00:04:20.428 EAL: request: mp_malloc_sync 00:04:20.428 EAL: No shared files mode enabled, IPC is disabled 00:04:20.428 EAL: Heap on socket 0 was expanded by 2MB 00:04:20.428 EAL: No shared files mode enabled, IPC is disabled 00:04:20.428 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:20.428 EAL: Mem event callback 'spdk:(nil)' registered 00:04:20.428 00:04:20.428 00:04:20.428 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.428 http://cunit.sourceforge.net/ 00:04:20.428 00:04:20.428 00:04:20.428 Suite: components_suite 00:04:20.997 Test: vtophys_malloc_test ...passed 00:04:20.997 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:20.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.997 EAL: Restoring previous memory policy: 4 00:04:20.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.997 EAL: request: mp_malloc_sync 00:04:20.997 EAL: No shared files mode enabled, IPC is disabled 00:04:20.997 EAL: Heap on socket 0 was expanded by 4MB 00:04:20.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.997 EAL: request: mp_malloc_sync 00:04:20.997 EAL: No shared files mode enabled, IPC is disabled 00:04:20.997 EAL: Heap on socket 0 was shrunk by 4MB 00:04:20.997 EAL: Trying to obtain current memory policy. 00:04:20.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.997 EAL: Restoring previous memory policy: 4 00:04:20.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.997 EAL: request: mp_malloc_sync 00:04:20.997 EAL: No shared files mode enabled, IPC is disabled 00:04:20.997 EAL: Heap on socket 0 was expanded by 6MB 00:04:20.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.997 EAL: request: mp_malloc_sync 00:04:20.997 EAL: No shared files mode enabled, IPC is disabled 00:04:20.997 EAL: Heap on socket 0 was shrunk by 6MB 00:04:20.997 EAL: Trying to obtain current memory policy. 00:04:20.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.997 EAL: Restoring previous memory policy: 4 00:04:20.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.997 EAL: request: mp_malloc_sync 00:04:20.997 EAL: No shared files mode enabled, IPC is disabled 00:04:20.997 EAL: Heap on socket 0 was expanded by 10MB 00:04:20.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.997 EAL: request: mp_malloc_sync 00:04:20.997 EAL: No shared files mode enabled, IPC is disabled 00:04:20.997 EAL: Heap on socket 0 was shrunk by 10MB 00:04:20.997 EAL: Trying to obtain current memory policy. 
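Each "Heap on socket 0 was expanded by N MB" message above corresponds to EAL taking 2 MB hugepages from the pool shown in the earlier setup.sh status table. The per-node counters can be watched directly via sysfs; this is an illustrative sketch using standard kernel paths, not anything SPDK-specific.

  for d in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
      node=$(basename "$(dirname "$(dirname "$d")")")   # e.g. node0, node1
      echo "$node: $(cat "$d/free_hugepages") free / $(cat "$d/nr_hugepages") total"
  done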
00:04:20.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.997 EAL: Restoring previous memory policy: 4 00:04:20.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.997 EAL: request: mp_malloc_sync 00:04:20.997 EAL: No shared files mode enabled, IPC is disabled 00:04:20.997 EAL: Heap on socket 0 was expanded by 18MB 00:04:20.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.997 EAL: request: mp_malloc_sync 00:04:20.997 EAL: No shared files mode enabled, IPC is disabled 00:04:20.997 EAL: Heap on socket 0 was shrunk by 18MB 00:04:20.997 EAL: Trying to obtain current memory policy. 00:04:20.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.997 EAL: Restoring previous memory policy: 4 00:04:20.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.997 EAL: request: mp_malloc_sync 00:04:20.997 EAL: No shared files mode enabled, IPC is disabled 00:04:20.997 EAL: Heap on socket 0 was expanded by 34MB 00:04:20.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.997 EAL: request: mp_malloc_sync 00:04:20.997 EAL: No shared files mode enabled, IPC is disabled 00:04:20.997 EAL: Heap on socket 0 was shrunk by 34MB 00:04:21.258 EAL: Trying to obtain current memory policy. 00:04:21.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.258 EAL: Restoring previous memory policy: 4 00:04:21.258 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.258 EAL: request: mp_malloc_sync 00:04:21.258 EAL: No shared files mode enabled, IPC is disabled 00:04:21.258 EAL: Heap on socket 0 was expanded by 66MB 00:04:21.258 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.258 EAL: request: mp_malloc_sync 00:04:21.258 EAL: No shared files mode enabled, IPC is disabled 00:04:21.258 EAL: Heap on socket 0 was shrunk by 66MB 00:04:21.258 EAL: Trying to obtain current memory policy. 00:04:21.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.518 EAL: Restoring previous memory policy: 4 00:04:21.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.518 EAL: request: mp_malloc_sync 00:04:21.518 EAL: No shared files mode enabled, IPC is disabled 00:04:21.518 EAL: Heap on socket 0 was expanded by 130MB 00:04:21.518 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.778 EAL: request: mp_malloc_sync 00:04:21.778 EAL: No shared files mode enabled, IPC is disabled 00:04:21.778 EAL: Heap on socket 0 was shrunk by 130MB 00:04:21.778 EAL: Trying to obtain current memory policy. 00:04:21.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.037 EAL: Restoring previous memory policy: 4 00:04:22.037 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.037 EAL: request: mp_malloc_sync 00:04:22.037 EAL: No shared files mode enabled, IPC is disabled 00:04:22.037 EAL: Heap on socket 0 was expanded by 258MB 00:04:22.296 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.556 EAL: request: mp_malloc_sync 00:04:22.556 EAL: No shared files mode enabled, IPC is disabled 00:04:22.556 EAL: Heap on socket 0 was shrunk by 258MB 00:04:22.816 EAL: Trying to obtain current memory policy. 
00:04:22.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.076 EAL: Restoring previous memory policy: 4 00:04:23.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.076 EAL: request: mp_malloc_sync 00:04:23.076 EAL: No shared files mode enabled, IPC is disabled 00:04:23.076 EAL: Heap on socket 0 was expanded by 514MB 00:04:24.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.019 EAL: request: mp_malloc_sync 00:04:24.019 EAL: No shared files mode enabled, IPC is disabled 00:04:24.019 EAL: Heap on socket 0 was shrunk by 514MB 00:04:24.958 EAL: Trying to obtain current memory policy. 00:04:24.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.218 EAL: Restoring previous memory policy: 4 00:04:25.218 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.218 EAL: request: mp_malloc_sync 00:04:25.218 EAL: No shared files mode enabled, IPC is disabled 00:04:25.218 EAL: Heap on socket 0 was expanded by 1026MB 00:04:27.127 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.386 EAL: request: mp_malloc_sync 00:04:27.386 EAL: No shared files mode enabled, IPC is disabled 00:04:27.386 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:28.768 passed 00:04:28.768 00:04:28.768 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.768 suites 1 1 n/a 0 0 00:04:28.768 tests 2 2 2 0 0 00:04:28.768 asserts 497 497 497 0 n/a 00:04:28.768 00:04:28.768 Elapsed time = 8.293 seconds 00:04:28.768 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.768 EAL: request: mp_malloc_sync 00:04:28.768 EAL: No shared files mode enabled, IPC is disabled 00:04:28.768 EAL: Heap on socket 0 was shrunk by 2MB 00:04:28.768 EAL: No shared files mode enabled, IPC is disabled 00:04:28.768 EAL: No shared files mode enabled, IPC is disabled 00:04:28.768 EAL: No shared files mode enabled, IPC is disabled 00:04:28.768 00:04:28.768 real 0m8.574s 00:04:28.768 user 0m7.436s 00:04:28.768 sys 0m1.077s 00:04:28.768 23:37:54 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.768 23:37:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:28.768 ************************************ 00:04:28.768 END TEST env_vtophys 00:04:28.768 ************************************ 00:04:29.027 23:37:54 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:29.027 23:37:54 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.027 23:37:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.027 23:37:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.027 ************************************ 00:04:29.027 START TEST env_pci 00:04:29.027 ************************************ 00:04:29.027 23:37:54 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:29.027 00:04:29.027 00:04:29.027 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.027 http://cunit.sourceforge.net/ 00:04:29.027 00:04:29.027 00:04:29.027 Suite: pci 00:04:29.027 Test: pci_hook ...[2024-11-09 23:37:55.020762] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3315904 has claimed it 00:04:29.027 EAL: Cannot find device (10000:00:01.0) 00:04:29.027 EAL: Failed to attach device on primary process 00:04:29.027 passed 00:04:29.027 00:04:29.027 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:29.027 suites 1 1 n/a 0 0 00:04:29.027 tests 1 1 1 0 0 00:04:29.027 asserts 25 25 25 0 n/a 00:04:29.027 00:04:29.027 Elapsed time = 0.043 seconds 00:04:29.027 00:04:29.027 real 0m0.092s 00:04:29.027 user 0m0.033s 00:04:29.027 sys 0m0.059s 00:04:29.027 23:37:55 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.027 23:37:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:29.027 ************************************ 00:04:29.027 END TEST env_pci 00:04:29.027 ************************************ 00:04:29.027 23:37:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:29.027 23:37:55 env -- env/env.sh@15 -- # uname 00:04:29.027 23:37:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:29.027 23:37:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:29.027 23:37:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.027 23:37:55 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:29.027 23:37:55 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.027 23:37:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.027 ************************************ 00:04:29.027 START TEST env_dpdk_post_init 00:04:29.027 ************************************ 00:04:29.027 23:37:55 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.027 EAL: Detected CPU lcores: 48 00:04:29.027 EAL: Detected NUMA nodes: 2 00:04:29.027 EAL: Detected shared linkage of DPDK 00:04:29.027 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.287 EAL: Selected IOVA mode 'VA' 00:04:29.287 EAL: VFIO support initialized 00:04:29.287 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.287 EAL: Using IOMMU type 1 (Type 1) 00:04:29.287 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:29.287 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:29.287 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:29.287 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:29.287 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:29.287 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:29.287 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:29.287 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:29.548 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:29.548 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:29.548 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:29.548 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:29.548 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:29.548 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:29.548 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:29.548 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:30.487 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
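The spdk_ioat/spdk_nvme probe lines above only succeed because setup.sh previously rebound these functions to vfio-pci; the binding that the probe sees can be read back from sysfs. The BDFs below are taken from the log, and the loop itself is just an illustrative check rather than part of the test.

  for bdf in 0000:00:04.0 0000:80:04.0 0000:88:00.0; do
      if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
          # the driver attribute is a symlink to the bound kernel driver
          echo "$bdf -> $(basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")")"
      else
          echo "$bdf -> (no driver bound)"
      fi
  done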
00:04:33.775 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:33.775 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:33.775 Starting DPDK initialization... 00:04:33.775 Starting SPDK post initialization... 00:04:33.775 SPDK NVMe probe 00:04:33.775 Attaching to 0000:88:00.0 00:04:33.775 Attached to 0000:88:00.0 00:04:33.775 Cleaning up... 00:04:33.775 00:04:33.775 real 0m4.566s 00:04:33.775 user 0m3.116s 00:04:33.775 sys 0m0.506s 00:04:33.775 23:37:59 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:33.775 23:37:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:33.775 ************************************ 00:04:33.775 END TEST env_dpdk_post_init 00:04:33.775 ************************************ 00:04:33.775 23:37:59 env -- env/env.sh@26 -- # uname 00:04:33.775 23:37:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:33.775 23:37:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.775 23:37:59 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:33.775 23:37:59 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:33.775 23:37:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.775 ************************************ 00:04:33.775 START TEST env_mem_callbacks 00:04:33.775 ************************************ 00:04:33.775 23:37:59 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.775 EAL: Detected CPU lcores: 48 00:04:33.775 EAL: Detected NUMA nodes: 2 00:04:33.775 EAL: Detected shared linkage of DPDK 00:04:33.775 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.775 EAL: Selected IOVA mode 'VA' 00:04:33.775 EAL: VFIO support initialized 00:04:33.775 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.775 00:04:33.775 00:04:33.775 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.775 http://cunit.sourceforge.net/ 00:04:33.775 00:04:33.775 00:04:33.775 Suite: memory 00:04:33.775 Test: test ... 
00:04:33.775 register 0x200000200000 2097152 00:04:33.775 malloc 3145728 00:04:33.775 register 0x200000400000 4194304 00:04:33.775 buf 0x2000004fffc0 len 3145728 PASSED 00:04:33.775 malloc 64 00:04:33.775 buf 0x2000004ffec0 len 64 PASSED 00:04:33.775 malloc 4194304 00:04:33.775 register 0x200000800000 6291456 00:04:33.775 buf 0x2000009fffc0 len 4194304 PASSED 00:04:33.775 free 0x2000004fffc0 3145728 00:04:33.775 free 0x2000004ffec0 64 00:04:33.775 unregister 0x200000400000 4194304 PASSED 00:04:33.775 free 0x2000009fffc0 4194304 00:04:33.775 unregister 0x200000800000 6291456 PASSED 00:04:33.775 malloc 8388608 00:04:33.775 register 0x200000400000 10485760 00:04:33.775 buf 0x2000005fffc0 len 8388608 PASSED 00:04:33.775 free 0x2000005fffc0 8388608 00:04:33.775 unregister 0x200000400000 10485760 PASSED 00:04:33.775 passed 00:04:33.775 00:04:33.775 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.775 suites 1 1 n/a 0 0 00:04:33.775 tests 1 1 1 0 0 00:04:33.775 asserts 15 15 15 0 n/a 00:04:33.775 00:04:33.775 Elapsed time = 0.060 seconds 00:04:33.775 00:04:33.775 real 0m0.179s 00:04:33.775 user 0m0.100s 00:04:33.775 sys 0m0.078s 00:04:33.775 23:37:59 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:33.775 23:37:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:33.775 ************************************ 00:04:33.775 END TEST env_mem_callbacks 00:04:33.775 ************************************ 00:04:33.775 00:04:33.775 real 0m14.050s 00:04:33.775 user 0m11.140s 00:04:33.775 sys 0m1.923s 00:04:33.775 23:37:59 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:33.775 23:37:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.775 ************************************ 00:04:33.775 END TEST env 00:04:33.775 ************************************ 00:04:34.033 23:37:59 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:34.033 23:37:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:34.033 23:37:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:34.033 23:37:59 -- common/autotest_common.sh@10 -- # set +x 00:04:34.033 ************************************ 00:04:34.033 START TEST rpc 00:04:34.033 ************************************ 00:04:34.033 23:38:00 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:34.033 * Looking for test storage... 
00:04:34.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:34.033 23:38:00 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:34.033 23:38:00 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:34.033 23:38:00 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:34.033 23:38:00 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:34.033 23:38:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.033 23:38:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.033 23:38:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.033 23:38:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.033 23:38:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.033 23:38:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.033 23:38:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.033 23:38:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.033 23:38:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.033 23:38:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.033 23:38:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.033 23:38:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.033 23:38:00 rpc -- scripts/common.sh@345 -- # : 1 00:04:34.033 23:38:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.033 23:38:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.033 23:38:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.033 23:38:00 rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.033 23:38:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.033 23:38:00 rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.033 23:38:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.033 23:38:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.033 23:38:00 rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.033 23:38:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.033 23:38:00 rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.033 23:38:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.033 23:38:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.033 23:38:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.033 23:38:00 rpc -- scripts/common.sh@368 -- # return 0 00:04:34.033 23:38:00 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.033 23:38:00 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:34.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.033 --rc genhtml_branch_coverage=1 00:04:34.033 --rc genhtml_function_coverage=1 00:04:34.033 --rc genhtml_legend=1 00:04:34.033 --rc geninfo_all_blocks=1 00:04:34.033 --rc geninfo_unexecuted_blocks=1 00:04:34.033 00:04:34.033 ' 00:04:34.033 23:38:00 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:34.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.034 --rc genhtml_branch_coverage=1 00:04:34.034 --rc genhtml_function_coverage=1 00:04:34.034 --rc genhtml_legend=1 00:04:34.034 --rc geninfo_all_blocks=1 00:04:34.034 --rc geninfo_unexecuted_blocks=1 00:04:34.034 00:04:34.034 ' 00:04:34.034 23:38:00 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:34.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.034 --rc genhtml_branch_coverage=1 00:04:34.034 --rc genhtml_function_coverage=1 
00:04:34.034 --rc genhtml_legend=1 00:04:34.034 --rc geninfo_all_blocks=1 00:04:34.034 --rc geninfo_unexecuted_blocks=1 00:04:34.034 00:04:34.034 ' 00:04:34.034 23:38:00 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:34.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.034 --rc genhtml_branch_coverage=1 00:04:34.034 --rc genhtml_function_coverage=1 00:04:34.034 --rc genhtml_legend=1 00:04:34.034 --rc geninfo_all_blocks=1 00:04:34.034 --rc geninfo_unexecuted_blocks=1 00:04:34.034 00:04:34.034 ' 00:04:34.034 23:38:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3316699 00:04:34.034 23:38:00 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:34.034 23:38:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.034 23:38:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3316699 00:04:34.034 23:38:00 rpc -- common/autotest_common.sh@833 -- # '[' -z 3316699 ']' 00:04:34.034 23:38:00 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.034 23:38:00 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:34.034 23:38:00 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.034 23:38:00 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:34.034 23:38:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.292 [2024-11-09 23:38:00.257503] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:04:34.292 [2024-11-09 23:38:00.257696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3316699 ] 00:04:34.292 [2024-11-09 23:38:00.406903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.551 [2024-11-09 23:38:00.544319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:34.551 [2024-11-09 23:38:00.544397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3316699' to capture a snapshot of events at runtime. 00:04:34.551 [2024-11-09 23:38:00.544426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:34.551 [2024-11-09 23:38:00.544448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:34.551 [2024-11-09 23:38:00.544479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3316699 for offline analysis/debug. 
00:04:34.551 [2024-11-09 23:38:00.546066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.487 23:38:01 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:35.487 23:38:01 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:35.487 23:38:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.487 23:38:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.487 23:38:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:35.487 23:38:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:35.487 23:38:01 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.487 23:38:01 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.487 23:38:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.487 ************************************ 00:04:35.487 START TEST rpc_integrity 00:04:35.487 ************************************ 00:04:35.487 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:35.487 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.487 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.487 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.487 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.487 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.487 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.487 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.487 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.487 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.487 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.487 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.487 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:35.487 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.487 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.487 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.487 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.487 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.487 { 00:04:35.487 "name": "Malloc0", 00:04:35.487 "aliases": [ 00:04:35.487 "37d22b22-ef37-41e1-9e58-e8ab2af5510e" 00:04:35.487 ], 00:04:35.487 "product_name": "Malloc disk", 00:04:35.487 "block_size": 512, 00:04:35.487 "num_blocks": 16384, 00:04:35.487 "uuid": "37d22b22-ef37-41e1-9e58-e8ab2af5510e", 00:04:35.487 "assigned_rate_limits": { 00:04:35.487 "rw_ios_per_sec": 0, 00:04:35.487 "rw_mbytes_per_sec": 0, 00:04:35.487 "r_mbytes_per_sec": 0, 00:04:35.487 "w_mbytes_per_sec": 0 00:04:35.487 }, 
00:04:35.487 "claimed": false, 00:04:35.487 "zoned": false, 00:04:35.487 "supported_io_types": { 00:04:35.487 "read": true, 00:04:35.487 "write": true, 00:04:35.487 "unmap": true, 00:04:35.487 "flush": true, 00:04:35.487 "reset": true, 00:04:35.487 "nvme_admin": false, 00:04:35.488 "nvme_io": false, 00:04:35.488 "nvme_io_md": false, 00:04:35.488 "write_zeroes": true, 00:04:35.488 "zcopy": true, 00:04:35.488 "get_zone_info": false, 00:04:35.488 "zone_management": false, 00:04:35.488 "zone_append": false, 00:04:35.488 "compare": false, 00:04:35.488 "compare_and_write": false, 00:04:35.488 "abort": true, 00:04:35.488 "seek_hole": false, 00:04:35.488 "seek_data": false, 00:04:35.488 "copy": true, 00:04:35.488 "nvme_iov_md": false 00:04:35.488 }, 00:04:35.488 "memory_domains": [ 00:04:35.488 { 00:04:35.488 "dma_device_id": "system", 00:04:35.488 "dma_device_type": 1 00:04:35.488 }, 00:04:35.488 { 00:04:35.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.488 "dma_device_type": 2 00:04:35.488 } 00:04:35.488 ], 00:04:35.488 "driver_specific": {} 00:04:35.488 } 00:04:35.488 ]' 00:04:35.488 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.488 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.488 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:35.488 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.488 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.488 [2024-11-09 23:38:01.658248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:35.488 [2024-11-09 23:38:01.658317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.488 [2024-11-09 23:38:01.658365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:35.488 [2024-11-09 23:38:01.658390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.488 [2024-11-09 23:38:01.661240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.488 [2024-11-09 23:38:01.661278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.488 Passthru0 00:04:35.488 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.488 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.488 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.488 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.488 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.488 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.488 { 00:04:35.488 "name": "Malloc0", 00:04:35.488 "aliases": [ 00:04:35.488 "37d22b22-ef37-41e1-9e58-e8ab2af5510e" 00:04:35.488 ], 00:04:35.488 "product_name": "Malloc disk", 00:04:35.488 "block_size": 512, 00:04:35.488 "num_blocks": 16384, 00:04:35.488 "uuid": "37d22b22-ef37-41e1-9e58-e8ab2af5510e", 00:04:35.488 "assigned_rate_limits": { 00:04:35.488 "rw_ios_per_sec": 0, 00:04:35.488 "rw_mbytes_per_sec": 0, 00:04:35.488 "r_mbytes_per_sec": 0, 00:04:35.488 "w_mbytes_per_sec": 0 00:04:35.488 }, 00:04:35.488 "claimed": true, 00:04:35.488 "claim_type": "exclusive_write", 00:04:35.488 "zoned": false, 00:04:35.488 "supported_io_types": { 00:04:35.488 "read": true, 00:04:35.488 "write": true, 00:04:35.488 "unmap": true, 00:04:35.488 
"flush": true, 00:04:35.488 "reset": true, 00:04:35.488 "nvme_admin": false, 00:04:35.488 "nvme_io": false, 00:04:35.488 "nvme_io_md": false, 00:04:35.488 "write_zeroes": true, 00:04:35.488 "zcopy": true, 00:04:35.488 "get_zone_info": false, 00:04:35.488 "zone_management": false, 00:04:35.488 "zone_append": false, 00:04:35.488 "compare": false, 00:04:35.488 "compare_and_write": false, 00:04:35.488 "abort": true, 00:04:35.488 "seek_hole": false, 00:04:35.488 "seek_data": false, 00:04:35.488 "copy": true, 00:04:35.488 "nvme_iov_md": false 00:04:35.488 }, 00:04:35.488 "memory_domains": [ 00:04:35.488 { 00:04:35.488 "dma_device_id": "system", 00:04:35.488 "dma_device_type": 1 00:04:35.488 }, 00:04:35.488 { 00:04:35.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.488 "dma_device_type": 2 00:04:35.488 } 00:04:35.488 ], 00:04:35.488 "driver_specific": {} 00:04:35.488 }, 00:04:35.488 { 00:04:35.488 "name": "Passthru0", 00:04:35.488 "aliases": [ 00:04:35.488 "69d62fef-6643-5df3-905f-e2e0f3b10ec9" 00:04:35.488 ], 00:04:35.488 "product_name": "passthru", 00:04:35.488 "block_size": 512, 00:04:35.488 "num_blocks": 16384, 00:04:35.488 "uuid": "69d62fef-6643-5df3-905f-e2e0f3b10ec9", 00:04:35.488 "assigned_rate_limits": { 00:04:35.488 "rw_ios_per_sec": 0, 00:04:35.488 "rw_mbytes_per_sec": 0, 00:04:35.488 "r_mbytes_per_sec": 0, 00:04:35.488 "w_mbytes_per_sec": 0 00:04:35.488 }, 00:04:35.488 "claimed": false, 00:04:35.488 "zoned": false, 00:04:35.488 "supported_io_types": { 00:04:35.488 "read": true, 00:04:35.488 "write": true, 00:04:35.488 "unmap": true, 00:04:35.488 "flush": true, 00:04:35.488 "reset": true, 00:04:35.488 "nvme_admin": false, 00:04:35.488 "nvme_io": false, 00:04:35.488 "nvme_io_md": false, 00:04:35.488 "write_zeroes": true, 00:04:35.488 "zcopy": true, 00:04:35.488 "get_zone_info": false, 00:04:35.488 "zone_management": false, 00:04:35.488 "zone_append": false, 00:04:35.488 "compare": false, 00:04:35.488 "compare_and_write": false, 00:04:35.488 "abort": true, 00:04:35.488 "seek_hole": false, 00:04:35.488 "seek_data": false, 00:04:35.488 "copy": true, 00:04:35.488 "nvme_iov_md": false 00:04:35.488 }, 00:04:35.488 "memory_domains": [ 00:04:35.488 { 00:04:35.488 "dma_device_id": "system", 00:04:35.488 "dma_device_type": 1 00:04:35.488 }, 00:04:35.488 { 00:04:35.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.488 "dma_device_type": 2 00:04:35.488 } 00:04:35.488 ], 00:04:35.488 "driver_specific": { 00:04:35.488 "passthru": { 00:04:35.488 "name": "Passthru0", 00:04:35.488 "base_bdev_name": "Malloc0" 00:04:35.488 } 00:04:35.488 } 00:04:35.488 } 00:04:35.488 ]' 00:04:35.488 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.747 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.747 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.747 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.747 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.747 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.747 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:35.747 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.747 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.747 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.747 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:04:35.747 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.747 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.747 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.747 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.747 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.747 23:38:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.747 00:04:35.747 real 0m0.261s 00:04:35.747 user 0m0.151s 00:04:35.747 sys 0m0.022s 00:04:35.747 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.747 23:38:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.747 ************************************ 00:04:35.747 END TEST rpc_integrity 00:04:35.747 ************************************ 00:04:35.747 23:38:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:35.747 23:38:01 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.747 23:38:01 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.747 23:38:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.747 ************************************ 00:04:35.747 START TEST rpc_plugins 00:04:35.747 ************************************ 00:04:35.747 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:35.747 23:38:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:35.747 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.747 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.747 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.748 23:38:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:35.748 23:38:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:35.748 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.748 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.748 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.748 23:38:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:35.748 { 00:04:35.748 "name": "Malloc1", 00:04:35.748 "aliases": [ 00:04:35.748 "b625dc63-a618-40c8-9bd4-d8b63a05cb92" 00:04:35.748 ], 00:04:35.748 "product_name": "Malloc disk", 00:04:35.748 "block_size": 4096, 00:04:35.748 "num_blocks": 256, 00:04:35.748 "uuid": "b625dc63-a618-40c8-9bd4-d8b63a05cb92", 00:04:35.748 "assigned_rate_limits": { 00:04:35.748 "rw_ios_per_sec": 0, 00:04:35.748 "rw_mbytes_per_sec": 0, 00:04:35.748 "r_mbytes_per_sec": 0, 00:04:35.748 "w_mbytes_per_sec": 0 00:04:35.748 }, 00:04:35.748 "claimed": false, 00:04:35.748 "zoned": false, 00:04:35.748 "supported_io_types": { 00:04:35.748 "read": true, 00:04:35.748 "write": true, 00:04:35.748 "unmap": true, 00:04:35.748 "flush": true, 00:04:35.748 "reset": true, 00:04:35.748 "nvme_admin": false, 00:04:35.748 "nvme_io": false, 00:04:35.748 "nvme_io_md": false, 00:04:35.748 "write_zeroes": true, 00:04:35.748 "zcopy": true, 00:04:35.748 "get_zone_info": false, 00:04:35.748 "zone_management": false, 00:04:35.748 "zone_append": false, 00:04:35.748 "compare": false, 00:04:35.748 "compare_and_write": false, 00:04:35.748 "abort": true, 00:04:35.748 "seek_hole": false, 00:04:35.748 "seek_data": false, 00:04:35.748 "copy": true, 00:04:35.748 "nvme_iov_md": 
false 00:04:35.748 }, 00:04:35.748 "memory_domains": [ 00:04:35.748 { 00:04:35.748 "dma_device_id": "system", 00:04:35.748 "dma_device_type": 1 00:04:35.748 }, 00:04:35.748 { 00:04:35.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.748 "dma_device_type": 2 00:04:35.748 } 00:04:35.748 ], 00:04:35.748 "driver_specific": {} 00:04:35.748 } 00:04:35.748 ]' 00:04:35.748 23:38:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:35.748 23:38:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:35.748 23:38:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:35.748 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.748 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.748 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.748 23:38:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:35.748 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.748 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.748 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.748 23:38:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:35.748 23:38:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:36.008 23:38:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:36.008 00:04:36.008 real 0m0.118s 00:04:36.008 user 0m0.073s 00:04:36.008 sys 0m0.011s 00:04:36.008 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:36.008 23:38:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.008 ************************************ 00:04:36.008 END TEST rpc_plugins 00:04:36.009 ************************************ 00:04:36.009 23:38:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:36.009 23:38:01 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.009 23:38:01 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.009 23:38:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.009 ************************************ 00:04:36.009 START TEST rpc_trace_cmd_test 00:04:36.009 ************************************ 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:36.009 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3316699", 00:04:36.009 "tpoint_group_mask": "0x8", 00:04:36.009 "iscsi_conn": { 00:04:36.009 "mask": "0x2", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "scsi": { 00:04:36.009 "mask": "0x4", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "bdev": { 00:04:36.009 "mask": "0x8", 00:04:36.009 "tpoint_mask": "0xffffffffffffffff" 00:04:36.009 }, 00:04:36.009 "nvmf_rdma": { 00:04:36.009 "mask": "0x10", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "nvmf_tcp": { 00:04:36.009 "mask": "0x20", 00:04:36.009 
"tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "ftl": { 00:04:36.009 "mask": "0x40", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "blobfs": { 00:04:36.009 "mask": "0x80", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "dsa": { 00:04:36.009 "mask": "0x200", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "thread": { 00:04:36.009 "mask": "0x400", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "nvme_pcie": { 00:04:36.009 "mask": "0x800", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "iaa": { 00:04:36.009 "mask": "0x1000", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "nvme_tcp": { 00:04:36.009 "mask": "0x2000", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "bdev_nvme": { 00:04:36.009 "mask": "0x4000", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "sock": { 00:04:36.009 "mask": "0x8000", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "blob": { 00:04:36.009 "mask": "0x10000", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "bdev_raid": { 00:04:36.009 "mask": "0x20000", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 }, 00:04:36.009 "scheduler": { 00:04:36.009 "mask": "0x40000", 00:04:36.009 "tpoint_mask": "0x0" 00:04:36.009 } 00:04:36.009 }' 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:36.009 00:04:36.009 real 0m0.196s 00:04:36.009 user 0m0.173s 00:04:36.009 sys 0m0.017s 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:36.009 23:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:36.009 ************************************ 00:04:36.009 END TEST rpc_trace_cmd_test 00:04:36.009 ************************************ 00:04:36.269 23:38:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:36.269 23:38:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:36.269 23:38:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:36.269 23:38:02 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.269 23:38:02 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.269 23:38:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.269 ************************************ 00:04:36.269 START TEST rpc_daemon_integrity 00:04:36.269 ************************************ 00:04:36.269 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:36.269 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.269 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.269 23:38:02 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.269 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.269 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.269 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:36.269 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:36.269 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.269 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.269 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.269 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.270 { 00:04:36.270 "name": "Malloc2", 00:04:36.270 "aliases": [ 00:04:36.270 "fc62d29d-2aa2-4751-ab11-d67d04d28742" 00:04:36.270 ], 00:04:36.270 "product_name": "Malloc disk", 00:04:36.270 "block_size": 512, 00:04:36.270 "num_blocks": 16384, 00:04:36.270 "uuid": "fc62d29d-2aa2-4751-ab11-d67d04d28742", 00:04:36.270 "assigned_rate_limits": { 00:04:36.270 "rw_ios_per_sec": 0, 00:04:36.270 "rw_mbytes_per_sec": 0, 00:04:36.270 "r_mbytes_per_sec": 0, 00:04:36.270 "w_mbytes_per_sec": 0 00:04:36.270 }, 00:04:36.270 "claimed": false, 00:04:36.270 "zoned": false, 00:04:36.270 "supported_io_types": { 00:04:36.270 "read": true, 00:04:36.270 "write": true, 00:04:36.270 "unmap": true, 00:04:36.270 "flush": true, 00:04:36.270 "reset": true, 00:04:36.270 "nvme_admin": false, 00:04:36.270 "nvme_io": false, 00:04:36.270 "nvme_io_md": false, 00:04:36.270 "write_zeroes": true, 00:04:36.270 "zcopy": true, 00:04:36.270 "get_zone_info": false, 00:04:36.270 "zone_management": false, 00:04:36.270 "zone_append": false, 00:04:36.270 "compare": false, 00:04:36.270 "compare_and_write": false, 00:04:36.270 "abort": true, 00:04:36.270 "seek_hole": false, 00:04:36.270 "seek_data": false, 00:04:36.270 "copy": true, 00:04:36.270 "nvme_iov_md": false 00:04:36.270 }, 00:04:36.270 "memory_domains": [ 00:04:36.270 { 00:04:36.270 "dma_device_id": "system", 00:04:36.270 "dma_device_type": 1 00:04:36.270 }, 00:04:36.270 { 00:04:36.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.270 "dma_device_type": 2 00:04:36.270 } 00:04:36.270 ], 00:04:36.270 "driver_specific": {} 00:04:36.270 } 00:04:36.270 ]' 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.270 [2024-11-09 23:38:02.368144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:36.270 
[2024-11-09 23:38:02.368209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.270 [2024-11-09 23:38:02.368254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:36.270 [2024-11-09 23:38:02.368280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.270 [2024-11-09 23:38:02.371092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.270 [2024-11-09 23:38:02.371128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.270 Passthru0 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.270 { 00:04:36.270 "name": "Malloc2", 00:04:36.270 "aliases": [ 00:04:36.270 "fc62d29d-2aa2-4751-ab11-d67d04d28742" 00:04:36.270 ], 00:04:36.270 "product_name": "Malloc disk", 00:04:36.270 "block_size": 512, 00:04:36.270 "num_blocks": 16384, 00:04:36.270 "uuid": "fc62d29d-2aa2-4751-ab11-d67d04d28742", 00:04:36.270 "assigned_rate_limits": { 00:04:36.270 "rw_ios_per_sec": 0, 00:04:36.270 "rw_mbytes_per_sec": 0, 00:04:36.270 "r_mbytes_per_sec": 0, 00:04:36.270 "w_mbytes_per_sec": 0 00:04:36.270 }, 00:04:36.270 "claimed": true, 00:04:36.270 "claim_type": "exclusive_write", 00:04:36.270 "zoned": false, 00:04:36.270 "supported_io_types": { 00:04:36.270 "read": true, 00:04:36.270 "write": true, 00:04:36.270 "unmap": true, 00:04:36.270 "flush": true, 00:04:36.270 "reset": true, 00:04:36.270 "nvme_admin": false, 00:04:36.270 "nvme_io": false, 00:04:36.270 "nvme_io_md": false, 00:04:36.270 "write_zeroes": true, 00:04:36.270 "zcopy": true, 00:04:36.270 "get_zone_info": false, 00:04:36.270 "zone_management": false, 00:04:36.270 "zone_append": false, 00:04:36.270 "compare": false, 00:04:36.270 "compare_and_write": false, 00:04:36.270 "abort": true, 00:04:36.270 "seek_hole": false, 00:04:36.270 "seek_data": false, 00:04:36.270 "copy": true, 00:04:36.270 "nvme_iov_md": false 00:04:36.270 }, 00:04:36.270 "memory_domains": [ 00:04:36.270 { 00:04:36.270 "dma_device_id": "system", 00:04:36.270 "dma_device_type": 1 00:04:36.270 }, 00:04:36.270 { 00:04:36.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.270 "dma_device_type": 2 00:04:36.270 } 00:04:36.270 ], 00:04:36.270 "driver_specific": {} 00:04:36.270 }, 00:04:36.270 { 00:04:36.270 "name": "Passthru0", 00:04:36.270 "aliases": [ 00:04:36.270 "3b3237db-a5f7-579d-a8a3-912741c85bbb" 00:04:36.270 ], 00:04:36.270 "product_name": "passthru", 00:04:36.270 "block_size": 512, 00:04:36.270 "num_blocks": 16384, 00:04:36.270 "uuid": "3b3237db-a5f7-579d-a8a3-912741c85bbb", 00:04:36.270 "assigned_rate_limits": { 00:04:36.270 "rw_ios_per_sec": 0, 00:04:36.270 "rw_mbytes_per_sec": 0, 00:04:36.270 "r_mbytes_per_sec": 0, 00:04:36.270 "w_mbytes_per_sec": 0 00:04:36.270 }, 00:04:36.270 "claimed": false, 00:04:36.270 "zoned": false, 00:04:36.270 "supported_io_types": { 00:04:36.270 "read": true, 00:04:36.270 "write": true, 00:04:36.270 "unmap": true, 00:04:36.270 "flush": true, 00:04:36.270 "reset": true, 
00:04:36.270 "nvme_admin": false, 00:04:36.270 "nvme_io": false, 00:04:36.270 "nvme_io_md": false, 00:04:36.270 "write_zeroes": true, 00:04:36.270 "zcopy": true, 00:04:36.270 "get_zone_info": false, 00:04:36.270 "zone_management": false, 00:04:36.270 "zone_append": false, 00:04:36.270 "compare": false, 00:04:36.270 "compare_and_write": false, 00:04:36.270 "abort": true, 00:04:36.270 "seek_hole": false, 00:04:36.270 "seek_data": false, 00:04:36.270 "copy": true, 00:04:36.270 "nvme_iov_md": false 00:04:36.270 }, 00:04:36.270 "memory_domains": [ 00:04:36.270 { 00:04:36.270 "dma_device_id": "system", 00:04:36.270 "dma_device_type": 1 00:04:36.270 }, 00:04:36.270 { 00:04:36.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.270 "dma_device_type": 2 00:04:36.270 } 00:04:36.270 ], 00:04:36.270 "driver_specific": { 00:04:36.270 "passthru": { 00:04:36.270 "name": "Passthru0", 00:04:36.270 "base_bdev_name": "Malloc2" 00:04:36.270 } 00:04:36.270 } 00:04:36.270 } 00:04:36.270 ]' 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.270 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.530 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.530 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:36.530 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:36.530 23:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.530 00:04:36.530 real 0m0.252s 00:04:36.530 user 0m0.149s 00:04:36.530 sys 0m0.021s 00:04:36.530 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:36.530 23:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.530 ************************************ 00:04:36.530 END TEST rpc_daemon_integrity 00:04:36.530 ************************************ 00:04:36.530 23:38:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:36.530 23:38:02 rpc -- rpc/rpc.sh@84 -- # killprocess 3316699 00:04:36.530 23:38:02 rpc -- common/autotest_common.sh@952 -- # '[' -z 3316699 ']' 00:04:36.530 23:38:02 rpc -- common/autotest_common.sh@956 -- # kill -0 3316699 00:04:36.530 23:38:02 rpc -- common/autotest_common.sh@957 -- # uname 00:04:36.530 23:38:02 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:36.530 23:38:02 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3316699 
00:04:36.530 23:38:02 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:36.530 23:38:02 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:36.530 23:38:02 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3316699' 00:04:36.530 killing process with pid 3316699 00:04:36.530 23:38:02 rpc -- common/autotest_common.sh@971 -- # kill 3316699 00:04:36.530 23:38:02 rpc -- common/autotest_common.sh@976 -- # wait 3316699 00:04:39.069 00:04:39.069 real 0m5.016s 00:04:39.069 user 0m5.505s 00:04:39.069 sys 0m0.881s 00:04:39.069 23:38:05 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.069 23:38:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.069 ************************************ 00:04:39.069 END TEST rpc 00:04:39.069 ************************************ 00:04:39.069 23:38:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:39.069 23:38:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:39.069 23:38:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.069 23:38:05 -- common/autotest_common.sh@10 -- # set +x 00:04:39.069 ************************************ 00:04:39.069 START TEST skip_rpc 00:04:39.069 ************************************ 00:04:39.069 23:38:05 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:39.069 * Looking for test storage... 00:04:39.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.069 23:38:05 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.069 23:38:05 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.069 23:38:05 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.069 23:38:05 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.069 23:38:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.069 23:38:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.070 23:38:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:39.070 23:38:05 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.070 23:38:05 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.070 --rc genhtml_branch_coverage=1 00:04:39.070 --rc genhtml_function_coverage=1 00:04:39.070 --rc genhtml_legend=1 00:04:39.070 --rc geninfo_all_blocks=1 00:04:39.070 --rc geninfo_unexecuted_blocks=1 00:04:39.070 00:04:39.070 ' 00:04:39.070 23:38:05 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.070 --rc genhtml_branch_coverage=1 00:04:39.070 --rc genhtml_function_coverage=1 00:04:39.070 --rc genhtml_legend=1 00:04:39.070 --rc geninfo_all_blocks=1 00:04:39.070 --rc geninfo_unexecuted_blocks=1 00:04:39.070 00:04:39.070 ' 00:04:39.070 23:38:05 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.070 --rc genhtml_branch_coverage=1 00:04:39.070 --rc genhtml_function_coverage=1 00:04:39.070 --rc genhtml_legend=1 00:04:39.070 --rc geninfo_all_blocks=1 00:04:39.070 --rc geninfo_unexecuted_blocks=1 00:04:39.070 00:04:39.070 ' 00:04:39.070 23:38:05 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.070 --rc genhtml_branch_coverage=1 00:04:39.070 --rc genhtml_function_coverage=1 00:04:39.070 --rc genhtml_legend=1 00:04:39.070 --rc geninfo_all_blocks=1 00:04:39.070 --rc geninfo_unexecuted_blocks=1 00:04:39.070 00:04:39.070 ' 00:04:39.070 23:38:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:39.070 23:38:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:39.070 23:38:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:39.070 23:38:05 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:39.070 23:38:05 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.070 23:38:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.070 ************************************ 00:04:39.070 START TEST skip_rpc 00:04:39.070 ************************************ 00:04:39.070 23:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:39.070 
23:38:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3317424 00:04:39.070 23:38:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:39.070 23:38:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.070 23:38:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:39.327 [2024-11-09 23:38:05.354677] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:04:39.327 [2024-11-09 23:38:05.354810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3317424 ] 00:04:39.327 [2024-11-09 23:38:05.503152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.586 [2024-11-09 23:38:05.641733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3317424 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3317424 ']' 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3317424 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3317424 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3317424' 00:04:44.865 killing process with pid 3317424 00:04:44.865 23:38:10 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3317424 00:04:44.865 23:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3317424 00:04:46.775 00:04:46.775 real 0m7.464s 00:04:46.776 user 0m6.968s 00:04:46.776 sys 0m0.491s 00:04:46.776 23:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.776 23:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.776 ************************************ 00:04:46.776 END TEST skip_rpc 00:04:46.776 ************************************ 00:04:46.776 23:38:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.776 23:38:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.776 23:38:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.776 23:38:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.776 ************************************ 00:04:46.776 START TEST skip_rpc_with_json 00:04:46.776 ************************************ 00:04:46.776 23:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:46.776 23:38:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.776 23:38:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3318374 00:04:46.776 23:38:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.776 23:38:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.776 23:38:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3318374 00:04:46.776 23:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3318374 ']' 00:04:46.776 23:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.776 23:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:46.776 23:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.776 23:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:46.776 23:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.776 [2024-11-09 23:38:12.863830] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:04:46.776 [2024-11-09 23:38:12.864034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318374 ] 00:04:47.035 [2024-11-09 23:38:12.999914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.035 [2024-11-09 23:38:13.132634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.974 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.974 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:47.974 23:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.974 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.974 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.974 [2024-11-09 23:38:14.053835] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.974 request: 00:04:47.974 { 00:04:47.974 "trtype": "tcp", 00:04:47.974 "method": "nvmf_get_transports", 00:04:47.974 "req_id": 1 00:04:47.974 } 00:04:47.974 Got JSON-RPC error response 00:04:47.974 response: 00:04:47.974 { 00:04:47.974 "code": -19, 00:04:47.974 "message": "No such device" 00:04:47.974 } 00:04:47.975 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:47.975 23:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.975 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.975 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.975 [2024-11-09 23:38:14.062017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.975 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.975 23:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.975 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.975 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.234 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.234 23:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:48.234 { 00:04:48.234 "subsystems": [ 00:04:48.234 { 00:04:48.234 "subsystem": "fsdev", 00:04:48.234 "config": [ 00:04:48.234 { 00:04:48.234 "method": "fsdev_set_opts", 00:04:48.234 "params": { 00:04:48.234 "fsdev_io_pool_size": 65535, 00:04:48.234 "fsdev_io_cache_size": 256 00:04:48.234 } 00:04:48.234 } 00:04:48.234 ] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "keyring", 00:04:48.234 "config": [] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "iobuf", 00:04:48.234 "config": [ 00:04:48.234 { 00:04:48.234 "method": "iobuf_set_options", 00:04:48.234 "params": { 00:04:48.234 "small_pool_count": 8192, 00:04:48.234 "large_pool_count": 1024, 00:04:48.234 "small_bufsize": 8192, 00:04:48.234 "large_bufsize": 135168, 00:04:48.234 "enable_numa": false 00:04:48.234 } 00:04:48.234 } 00:04:48.234 ] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "sock", 00:04:48.234 "config": [ 
00:04:48.234 { 00:04:48.234 "method": "sock_set_default_impl", 00:04:48.234 "params": { 00:04:48.234 "impl_name": "posix" 00:04:48.234 } 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "method": "sock_impl_set_options", 00:04:48.234 "params": { 00:04:48.234 "impl_name": "ssl", 00:04:48.234 "recv_buf_size": 4096, 00:04:48.234 "send_buf_size": 4096, 00:04:48.234 "enable_recv_pipe": true, 00:04:48.234 "enable_quickack": false, 00:04:48.234 "enable_placement_id": 0, 00:04:48.234 "enable_zerocopy_send_server": true, 00:04:48.234 "enable_zerocopy_send_client": false, 00:04:48.234 "zerocopy_threshold": 0, 00:04:48.234 "tls_version": 0, 00:04:48.234 "enable_ktls": false 00:04:48.234 } 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "method": "sock_impl_set_options", 00:04:48.234 "params": { 00:04:48.234 "impl_name": "posix", 00:04:48.234 "recv_buf_size": 2097152, 00:04:48.234 "send_buf_size": 2097152, 00:04:48.234 "enable_recv_pipe": true, 00:04:48.234 "enable_quickack": false, 00:04:48.234 "enable_placement_id": 0, 00:04:48.234 "enable_zerocopy_send_server": true, 00:04:48.234 "enable_zerocopy_send_client": false, 00:04:48.234 "zerocopy_threshold": 0, 00:04:48.234 "tls_version": 0, 00:04:48.234 "enable_ktls": false 00:04:48.234 } 00:04:48.234 } 00:04:48.234 ] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "vmd", 00:04:48.234 "config": [] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "accel", 00:04:48.234 "config": [ 00:04:48.234 { 00:04:48.234 "method": "accel_set_options", 00:04:48.234 "params": { 00:04:48.234 "small_cache_size": 128, 00:04:48.234 "large_cache_size": 16, 00:04:48.234 "task_count": 2048, 00:04:48.234 "sequence_count": 2048, 00:04:48.234 "buf_count": 2048 00:04:48.234 } 00:04:48.234 } 00:04:48.234 ] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "bdev", 00:04:48.234 "config": [ 00:04:48.234 { 00:04:48.234 "method": "bdev_set_options", 00:04:48.234 "params": { 00:04:48.234 "bdev_io_pool_size": 65535, 00:04:48.234 "bdev_io_cache_size": 256, 00:04:48.234 "bdev_auto_examine": true, 00:04:48.234 "iobuf_small_cache_size": 128, 00:04:48.234 "iobuf_large_cache_size": 16 00:04:48.234 } 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "method": "bdev_raid_set_options", 00:04:48.234 "params": { 00:04:48.234 "process_window_size_kb": 1024, 00:04:48.234 "process_max_bandwidth_mb_sec": 0 00:04:48.234 } 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "method": "bdev_iscsi_set_options", 00:04:48.234 "params": { 00:04:48.234 "timeout_sec": 30 00:04:48.234 } 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "method": "bdev_nvme_set_options", 00:04:48.234 "params": { 00:04:48.234 "action_on_timeout": "none", 00:04:48.234 "timeout_us": 0, 00:04:48.234 "timeout_admin_us": 0, 00:04:48.234 "keep_alive_timeout_ms": 10000, 00:04:48.234 "arbitration_burst": 0, 00:04:48.234 "low_priority_weight": 0, 00:04:48.234 "medium_priority_weight": 0, 00:04:48.234 "high_priority_weight": 0, 00:04:48.234 "nvme_adminq_poll_period_us": 10000, 00:04:48.234 "nvme_ioq_poll_period_us": 0, 00:04:48.234 "io_queue_requests": 0, 00:04:48.234 "delay_cmd_submit": true, 00:04:48.234 "transport_retry_count": 4, 00:04:48.234 "bdev_retry_count": 3, 00:04:48.234 "transport_ack_timeout": 0, 00:04:48.234 "ctrlr_loss_timeout_sec": 0, 00:04:48.234 "reconnect_delay_sec": 0, 00:04:48.234 "fast_io_fail_timeout_sec": 0, 00:04:48.234 "disable_auto_failback": false, 00:04:48.234 "generate_uuids": false, 00:04:48.234 "transport_tos": 0, 00:04:48.234 "nvme_error_stat": false, 00:04:48.234 "rdma_srq_size": 0, 00:04:48.234 "io_path_stat": 
false, 00:04:48.234 "allow_accel_sequence": false, 00:04:48.234 "rdma_max_cq_size": 0, 00:04:48.234 "rdma_cm_event_timeout_ms": 0, 00:04:48.234 "dhchap_digests": [ 00:04:48.234 "sha256", 00:04:48.234 "sha384", 00:04:48.234 "sha512" 00:04:48.234 ], 00:04:48.234 "dhchap_dhgroups": [ 00:04:48.234 "null", 00:04:48.234 "ffdhe2048", 00:04:48.234 "ffdhe3072", 00:04:48.234 "ffdhe4096", 00:04:48.234 "ffdhe6144", 00:04:48.234 "ffdhe8192" 00:04:48.234 ] 00:04:48.234 } 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "method": "bdev_nvme_set_hotplug", 00:04:48.234 "params": { 00:04:48.234 "period_us": 100000, 00:04:48.234 "enable": false 00:04:48.234 } 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "method": "bdev_wait_for_examine" 00:04:48.234 } 00:04:48.234 ] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "scsi", 00:04:48.234 "config": null 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "scheduler", 00:04:48.234 "config": [ 00:04:48.234 { 00:04:48.234 "method": "framework_set_scheduler", 00:04:48.234 "params": { 00:04:48.234 "name": "static" 00:04:48.234 } 00:04:48.234 } 00:04:48.234 ] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "vhost_scsi", 00:04:48.234 "config": [] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "vhost_blk", 00:04:48.234 "config": [] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "ublk", 00:04:48.234 "config": [] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "nbd", 00:04:48.234 "config": [] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "nvmf", 00:04:48.234 "config": [ 00:04:48.234 { 00:04:48.234 "method": "nvmf_set_config", 00:04:48.234 "params": { 00:04:48.234 "discovery_filter": "match_any", 00:04:48.234 "admin_cmd_passthru": { 00:04:48.234 "identify_ctrlr": false 00:04:48.234 }, 00:04:48.234 "dhchap_digests": [ 00:04:48.234 "sha256", 00:04:48.234 "sha384", 00:04:48.234 "sha512" 00:04:48.234 ], 00:04:48.234 "dhchap_dhgroups": [ 00:04:48.234 "null", 00:04:48.234 "ffdhe2048", 00:04:48.234 "ffdhe3072", 00:04:48.234 "ffdhe4096", 00:04:48.234 "ffdhe6144", 00:04:48.234 "ffdhe8192" 00:04:48.234 ] 00:04:48.234 } 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "method": "nvmf_set_max_subsystems", 00:04:48.234 "params": { 00:04:48.234 "max_subsystems": 1024 00:04:48.234 } 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "method": "nvmf_set_crdt", 00:04:48.234 "params": { 00:04:48.234 "crdt1": 0, 00:04:48.234 "crdt2": 0, 00:04:48.234 "crdt3": 0 00:04:48.234 } 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "method": "nvmf_create_transport", 00:04:48.234 "params": { 00:04:48.234 "trtype": "TCP", 00:04:48.234 "max_queue_depth": 128, 00:04:48.234 "max_io_qpairs_per_ctrlr": 127, 00:04:48.234 "in_capsule_data_size": 4096, 00:04:48.234 "max_io_size": 131072, 00:04:48.234 "io_unit_size": 131072, 00:04:48.234 "max_aq_depth": 128, 00:04:48.234 "num_shared_buffers": 511, 00:04:48.234 "buf_cache_size": 4294967295, 00:04:48.234 "dif_insert_or_strip": false, 00:04:48.234 "zcopy": false, 00:04:48.234 "c2h_success": true, 00:04:48.234 "sock_priority": 0, 00:04:48.234 "abort_timeout_sec": 1, 00:04:48.234 "ack_timeout": 0, 00:04:48.234 "data_wr_pool_size": 0 00:04:48.234 } 00:04:48.234 } 00:04:48.234 ] 00:04:48.234 }, 00:04:48.234 { 00:04:48.234 "subsystem": "iscsi", 00:04:48.234 "config": [ 00:04:48.234 { 00:04:48.234 "method": "iscsi_set_options", 00:04:48.234 "params": { 00:04:48.234 "node_base": "iqn.2016-06.io.spdk", 00:04:48.234 "max_sessions": 128, 00:04:48.234 "max_connections_per_session": 2, 00:04:48.234 "max_queue_depth": 64, 00:04:48.234 
"default_time2wait": 2, 00:04:48.235 "default_time2retain": 20, 00:04:48.235 "first_burst_length": 8192, 00:04:48.235 "immediate_data": true, 00:04:48.235 "allow_duplicated_isid": false, 00:04:48.235 "error_recovery_level": 0, 00:04:48.235 "nop_timeout": 60, 00:04:48.235 "nop_in_interval": 30, 00:04:48.235 "disable_chap": false, 00:04:48.235 "require_chap": false, 00:04:48.235 "mutual_chap": false, 00:04:48.235 "chap_group": 0, 00:04:48.235 "max_large_datain_per_connection": 64, 00:04:48.235 "max_r2t_per_connection": 4, 00:04:48.235 "pdu_pool_size": 36864, 00:04:48.235 "immediate_data_pool_size": 16384, 00:04:48.235 "data_out_pool_size": 2048 00:04:48.235 } 00:04:48.235 } 00:04:48.235 ] 00:04:48.235 } 00:04:48.235 ] 00:04:48.235 } 00:04:48.235 23:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:48.235 23:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3318374 00:04:48.235 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3318374 ']' 00:04:48.235 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3318374 00:04:48.235 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:48.235 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:48.235 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3318374 00:04:48.235 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:48.235 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:48.235 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3318374' 00:04:48.235 killing process with pid 3318374 00:04:48.235 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3318374 00:04:48.235 23:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3318374 00:04:50.771 23:38:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3318864 00:04:50.771 23:38:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:50.771 23:38:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:56.048 23:38:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3318864 00:04:56.048 23:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3318864 ']' 00:04:56.048 23:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3318864 00:04:56.048 23:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:56.048 23:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:56.048 23:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3318864 00:04:56.048 23:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.048 23:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.048 23:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3318864' 00:04:56.048 killing process with pid 3318864 00:04:56.048 
23:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3318864 00:04:56.048 23:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3318864 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:58.005 00:04:58.005 real 0m11.370s 00:04:58.005 user 0m10.858s 00:04:58.005 sys 0m1.084s 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.005 ************************************ 00:04:58.005 END TEST skip_rpc_with_json 00:04:58.005 ************************************ 00:04:58.005 23:38:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:58.005 23:38:24 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.005 23:38:24 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.005 23:38:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.005 ************************************ 00:04:58.005 START TEST skip_rpc_with_delay 00:04:58.005 ************************************ 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:58.005 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.264 [2024-11-09 23:38:24.278184] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC 
server is going to be started. 00:04:58.264 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:58.264 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.264 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:58.264 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.264 00:04:58.264 real 0m0.147s 00:04:58.264 user 0m0.070s 00:04:58.264 sys 0m0.076s 00:04:58.264 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.264 23:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:58.264 ************************************ 00:04:58.264 END TEST skip_rpc_with_delay 00:04:58.264 ************************************ 00:04:58.264 23:38:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:58.264 23:38:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:58.264 23:38:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:58.264 23:38:24 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.264 23:38:24 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.264 23:38:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.264 ************************************ 00:04:58.264 START TEST exit_on_failed_rpc_init 00:04:58.264 ************************************ 00:04:58.264 23:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:58.264 23:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3319775 00:04:58.264 23:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.264 23:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3319775 00:04:58.264 23:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3319775 ']' 00:04:58.264 23:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.264 23:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:58.264 23:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.264 23:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:58.264 23:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.524 [2024-11-09 23:38:24.475114] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:04:58.524 [2024-11-09 23:38:24.475257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319775 ] 00:04:58.524 [2024-11-09 23:38:24.620834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.784 [2024-11-09 23:38:24.759868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:59.725 23:38:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.725 [2024-11-09 23:38:25.789351] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:04:59.725 [2024-11-09 23:38:25.789495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320026 ] 00:04:59.984 [2024-11-09 23:38:25.932158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.984 [2024-11-09 23:38:26.069774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.984 [2024-11-09 23:38:26.069919] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:59.984 [2024-11-09 23:38:26.069983] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:59.984 [2024-11-09 23:38:26.070005] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3319775 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3319775 ']' 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3319775 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3319775 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3319775' 00:05:00.244 killing process with pid 3319775 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3319775 00:05:00.244 23:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3319775 00:05:02.781 00:05:02.781 real 0m4.430s 00:05:02.781 user 0m4.887s 00:05:02.781 sys 0m0.772s 00:05:02.781 23:38:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.781 23:38:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.781 ************************************ 00:05:02.781 END TEST exit_on_failed_rpc_init 00:05:02.781 ************************************ 00:05:02.781 23:38:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:02.781 00:05:02.781 real 0m23.759s 00:05:02.781 user 0m22.968s 00:05:02.781 sys 0m2.605s 00:05:02.781 23:38:28 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.781 23:38:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.781 ************************************ 00:05:02.781 END TEST skip_rpc 00:05:02.781 ************************************ 00:05:02.781 23:38:28 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:02.781 23:38:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:02.781 23:38:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.781 23:38:28 -- 
common/autotest_common.sh@10 -- # set +x 00:05:02.781 ************************************ 00:05:02.781 START TEST rpc_client 00:05:02.781 ************************************ 00:05:02.781 23:38:28 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:02.781 * Looking for test storage... 00:05:02.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:02.781 23:38:28 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:02.781 23:38:28 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:02.781 23:38:28 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.041 23:38:29 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.041 23:38:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:03.041 23:38:29 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.041 23:38:29 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.041 --rc genhtml_branch_coverage=1 00:05:03.041 --rc genhtml_function_coverage=1 00:05:03.041 --rc genhtml_legend=1 00:05:03.041 --rc geninfo_all_blocks=1 00:05:03.041 --rc geninfo_unexecuted_blocks=1 00:05:03.041 00:05:03.041 ' 00:05:03.041 23:38:29 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.041 --rc genhtml_branch_coverage=1 00:05:03.041 --rc genhtml_function_coverage=1 00:05:03.041 --rc genhtml_legend=1 00:05:03.041 --rc geninfo_all_blocks=1 00:05:03.041 --rc geninfo_unexecuted_blocks=1 00:05:03.041 00:05:03.041 ' 00:05:03.041 23:38:29 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.041 --rc genhtml_branch_coverage=1 00:05:03.041 --rc genhtml_function_coverage=1 00:05:03.041 --rc genhtml_legend=1 00:05:03.041 --rc geninfo_all_blocks=1 00:05:03.041 --rc geninfo_unexecuted_blocks=1 00:05:03.041 00:05:03.041 ' 00:05:03.041 23:38:29 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.041 --rc genhtml_branch_coverage=1 00:05:03.041 --rc genhtml_function_coverage=1 00:05:03.041 --rc genhtml_legend=1 00:05:03.041 --rc geninfo_all_blocks=1 00:05:03.041 --rc geninfo_unexecuted_blocks=1 00:05:03.041 00:05:03.041 ' 00:05:03.041 23:38:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:03.041 OK 00:05:03.041 23:38:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:03.041 00:05:03.041 real 0m0.180s 00:05:03.041 user 0m0.112s 00:05:03.041 sys 0m0.074s 00:05:03.041 23:38:29 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.041 23:38:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:03.041 ************************************ 00:05:03.041 END TEST rpc_client 00:05:03.041 ************************************ 00:05:03.041 23:38:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:05:03.041 23:38:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.041 23:38:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.041 23:38:29 -- common/autotest_common.sh@10 -- # set +x 00:05:03.041 ************************************ 00:05:03.041 START TEST json_config 00:05:03.041 ************************************ 00:05:03.041 23:38:29 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:03.041 23:38:29 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.041 23:38:29 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.041 23:38:29 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.041 23:38:29 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.041 23:38:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.041 23:38:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.041 23:38:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.041 23:38:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.041 23:38:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.041 23:38:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.041 23:38:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.041 23:38:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.041 23:38:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.041 23:38:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.041 23:38:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.041 23:38:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:03.041 23:38:29 json_config -- scripts/common.sh@345 -- # : 1 00:05:03.041 23:38:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.041 23:38:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.041 23:38:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:03.041 23:38:29 json_config -- scripts/common.sh@353 -- # local d=1 00:05:03.041 23:38:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.041 23:38:29 json_config -- scripts/common.sh@355 -- # echo 1 00:05:03.041 23:38:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.041 23:38:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:03.041 23:38:29 json_config -- scripts/common.sh@353 -- # local d=2 00:05:03.041 23:38:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.041 23:38:29 json_config -- scripts/common.sh@355 -- # echo 2 00:05:03.041 23:38:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.041 23:38:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.041 23:38:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.041 23:38:29 json_config -- scripts/common.sh@368 -- # return 0 00:05:03.041 23:38:29 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.041 23:38:29 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.041 --rc genhtml_branch_coverage=1 00:05:03.041 --rc genhtml_function_coverage=1 00:05:03.041 --rc genhtml_legend=1 00:05:03.041 --rc geninfo_all_blocks=1 00:05:03.041 --rc geninfo_unexecuted_blocks=1 00:05:03.041 00:05:03.041 ' 00:05:03.041 23:38:29 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.041 --rc genhtml_branch_coverage=1 00:05:03.041 --rc genhtml_function_coverage=1 00:05:03.041 --rc genhtml_legend=1 00:05:03.041 --rc geninfo_all_blocks=1 00:05:03.041 --rc geninfo_unexecuted_blocks=1 00:05:03.041 00:05:03.041 ' 00:05:03.041 23:38:29 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.041 --rc genhtml_branch_coverage=1 00:05:03.041 --rc genhtml_function_coverage=1 00:05:03.041 --rc genhtml_legend=1 00:05:03.042 --rc geninfo_all_blocks=1 00:05:03.042 --rc geninfo_unexecuted_blocks=1 00:05:03.042 00:05:03.042 ' 00:05:03.042 23:38:29 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.042 --rc genhtml_branch_coverage=1 00:05:03.042 --rc genhtml_function_coverage=1 00:05:03.042 --rc genhtml_legend=1 00:05:03.042 --rc geninfo_all_blocks=1 00:05:03.042 --rc geninfo_unexecuted_blocks=1 00:05:03.042 00:05:03.042 ' 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:03.042 23:38:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:03.042 23:38:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.042 23:38:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.042 23:38:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.042 23:38:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.042 23:38:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.042 23:38:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.042 23:38:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.042 23:38:29 json_config -- paths/export.sh@5 -- # export PATH 00:05:03.042 23:38:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@51 -- # : 0 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:03.042 23:38:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.042 23:38:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:03.042 INFO: JSON configuration test init 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:03.042 23:38:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.042 23:38:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.042 23:38:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:03.042 23:38:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.042 23:38:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.301 23:38:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:03.301 23:38:29 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:03.301 23:38:29 json_config -- json_config/common.sh@10 -- # shift 00:05:03.301 23:38:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.301 23:38:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.301 23:38:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.301 23:38:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.301 23:38:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.301 23:38:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3320568 00:05:03.301 23:38:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:03.301 23:38:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.301 Waiting for target to run... 00:05:03.301 23:38:29 json_config -- json_config/common.sh@25 -- # waitforlisten 3320568 /var/tmp/spdk_tgt.sock 00:05:03.301 23:38:29 json_config -- common/autotest_common.sh@833 -- # '[' -z 3320568 ']' 00:05:03.301 23:38:29 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.301 23:38:29 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.301 23:38:29 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.301 23:38:29 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.301 23:38:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.301 [2024-11-09 23:38:29.343215] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:05:03.301 [2024-11-09 23:38:29.343349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320568 ] 00:05:03.871 [2024-11-09 23:38:29.775289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.871 [2024-11-09 23:38:29.897128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.129 23:38:30 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.129 23:38:30 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:04.129 23:38:30 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.129 00:05:04.129 23:38:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:04.129 23:38:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:04.129 23:38:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.129 23:38:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.129 23:38:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:04.129 23:38:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:04.129 23:38:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.129 23:38:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.129 23:38:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:04.129 23:38:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:04.129 23:38:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:08.320 23:38:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.320 23:38:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:08.320 23:38:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:08.320 23:38:34 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@54 -- # sort 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:08.320 23:38:34 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:08.320 23:38:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.320 23:38:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.578 23:38:34 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:08.578 23:38:34 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:08.578 23:38:34 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:08.578 23:38:34 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:08.578 23:38:34 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:08.579 23:38:34 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:08.579 23:38:34 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:08.579 23:38:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.579 23:38:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.579 23:38:34 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:08.579 23:38:34 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:08.579 23:38:34 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:08.579 23:38:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:08.579 23:38:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:08.836 MallocForNvmf0 00:05:08.836 23:38:34 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:08.836 23:38:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.094 MallocForNvmf1 00:05:09.094 23:38:35 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.094 23:38:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.352 [2024-11-09 23:38:35.342950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.352 23:38:35 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.352 23:38:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.611 23:38:35 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.611 23:38:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.869 23:38:35 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.869 23:38:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.126 23:38:36 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.126 23:38:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.384 [2024-11-09 23:38:36.414716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.384 23:38:36 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:10.384 23:38:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.384 23:38:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.384 23:38:36 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:10.384 23:38:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.384 23:38:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.384 23:38:36 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:10.384 23:38:36 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.384 23:38:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.643 MallocBdevForConfigChangeCheck 00:05:10.643 23:38:36 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:10.643 23:38:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.643 23:38:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.643 23:38:36 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:10.643 23:38:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.212 23:38:37 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:11.212 INFO: shutting down applications... 
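The create_nvmf_subsystem_config sequence traced above is the whole target-side setup that the rest of json_config.sh exercises: two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with both bdevs attached as namespaces, a listener on 127.0.0.1:4420, and finally save_config to snapshot the result. A minimal hand-run sketch of the same steps, assuming a target already listening on /var/tmp/spdk_tgt.sock and reusing only the parameters that appear in the log:

  # Sketch only - replays the RPCs recorded above against an already-running spdk_tgt.
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB bdev, 512-byte blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB bdev, 1024-byte blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0             # same transport flags the test passes
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $RPC save_config > spdk_tgt_config.json                    # snapshot reused for the relaunch below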
00:05:11.212 23:38:37 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:11.213 23:38:37 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:11.213 23:38:37 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:11.213 23:38:37 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:13.118 Calling clear_iscsi_subsystem 00:05:13.118 Calling clear_nvmf_subsystem 00:05:13.118 Calling clear_nbd_subsystem 00:05:13.118 Calling clear_ublk_subsystem 00:05:13.118 Calling clear_vhost_blk_subsystem 00:05:13.118 Calling clear_vhost_scsi_subsystem 00:05:13.118 Calling clear_bdev_subsystem 00:05:13.118 23:38:38 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:13.118 23:38:38 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:13.118 23:38:38 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:13.118 23:38:38 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.118 23:38:38 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:13.118 23:38:38 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:13.118 23:38:39 json_config -- json_config/json_config.sh@352 -- # break 00:05:13.118 23:38:39 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:13.118 23:38:39 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:13.118 23:38:39 json_config -- json_config/common.sh@31 -- # local app=target 00:05:13.118 23:38:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.118 23:38:39 json_config -- json_config/common.sh@35 -- # [[ -n 3320568 ]] 00:05:13.118 23:38:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3320568 00:05:13.118 23:38:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.118 23:38:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.118 23:38:39 json_config -- json_config/common.sh@41 -- # kill -0 3320568 00:05:13.118 23:38:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.685 23:38:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.685 23:38:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.685 23:38:39 json_config -- json_config/common.sh@41 -- # kill -0 3320568 00:05:13.685 23:38:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.251 23:38:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.251 23:38:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.251 23:38:40 json_config -- json_config/common.sh@41 -- # kill -0 3320568 00:05:14.251 23:38:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:14.251 23:38:40 json_config -- json_config/common.sh@43 -- # break 00:05:14.251 23:38:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:14.251 23:38:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:14.251 SPDK target shutdown done 00:05:14.251 23:38:40 
json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:14.251 INFO: relaunching applications... 00:05:14.251 23:38:40 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.251 23:38:40 json_config -- json_config/common.sh@9 -- # local app=target 00:05:14.251 23:38:40 json_config -- json_config/common.sh@10 -- # shift 00:05:14.251 23:38:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.251 23:38:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.251 23:38:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.251 23:38:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.251 23:38:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.251 23:38:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.251 23:38:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3321905 00:05:14.251 23:38:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.251 Waiting for target to run... 00:05:14.251 23:38:40 json_config -- json_config/common.sh@25 -- # waitforlisten 3321905 /var/tmp/spdk_tgt.sock 00:05:14.251 23:38:40 json_config -- common/autotest_common.sh@833 -- # '[' -z 3321905 ']' 00:05:14.251 23:38:40 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.251 23:38:40 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.251 23:38:40 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.251 23:38:40 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.251 23:38:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.251 [2024-11-09 23:38:40.338445] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:05:14.251 [2024-11-09 23:38:40.338649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321905 ] 00:05:14.816 [2024-11-09 23:38:40.980167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.075 [2024-11-09 23:38:41.109661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.270 [2024-11-09 23:38:44.901901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.270 [2024-11-09 23:38:44.934451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:19.270 23:38:44 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:19.270 23:38:44 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:19.270 23:38:44 json_config -- json_config/common.sh@26 -- # echo '' 00:05:19.270 00:05:19.270 23:38:44 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:19.270 23:38:44 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:19.270 INFO: Checking if target configuration is the same... 00:05:19.270 23:38:44 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.270 23:38:44 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:19.270 23:38:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.270 + '[' 2 -ne 2 ']' 00:05:19.270 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:19.270 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:19.270 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.270 +++ basename /dev/fd/62 00:05:19.270 ++ mktemp /tmp/62.XXX 00:05:19.270 + tmp_file_1=/tmp/62.grk 00:05:19.270 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.270 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:19.270 + tmp_file_2=/tmp/spdk_tgt_config.json.H41 00:05:19.270 + ret=0 00:05:19.270 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.270 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.270 + diff -u /tmp/62.grk /tmp/spdk_tgt_config.json.H41 00:05:19.270 + echo 'INFO: JSON config files are the same' 00:05:19.270 INFO: JSON config files are the same 00:05:19.270 + rm /tmp/62.grk /tmp/spdk_tgt_config.json.H41 00:05:19.270 + exit 0 00:05:19.270 23:38:45 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:19.270 23:38:45 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:19.270 INFO: changing configuration and checking if this can be detected... 
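The "configuration is the same" check above is purely textual: json_diff.sh feeds the relaunched target's live save_config output and the spdk_tgt_config.json it was started from through config_filter.py -method sort, then diffs the two normalized files and treats an empty diff (exit 0) as a match. A rough equivalent, assuming config_filter.py normalizes the JSON it receives on stdin as the trace suggests:

  # Sketch of the comparison json_diff.sh performs (paths from the log, stdin/stdout plumbing assumed).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  $SPDK/test/json_config/config_filter.py -method sort \
    < $SPDK/spdk_tgt_config.json > /tmp/file_sorted.json
  diff -u /tmp/file_sorted.json /tmp/live_sorted.json \
    && echo 'INFO: JSON config files are the same'           # exit 0 is the success path seen above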
00:05:19.270 23:38:45 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:19.270 23:38:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:19.528 23:38:45 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.528 23:38:45 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:19.528 23:38:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.528 + '[' 2 -ne 2 ']' 00:05:19.528 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:19.528 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:19.528 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.528 +++ basename /dev/fd/62 00:05:19.528 ++ mktemp /tmp/62.XXX 00:05:19.528 + tmp_file_1=/tmp/62.cdj 00:05:19.528 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.528 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:19.528 + tmp_file_2=/tmp/spdk_tgt_config.json.C8d 00:05:19.528 + ret=0 00:05:19.528 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.096 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.096 + diff -u /tmp/62.cdj /tmp/spdk_tgt_config.json.C8d 00:05:20.096 + ret=1 00:05:20.096 + echo '=== Start of file: /tmp/62.cdj ===' 00:05:20.096 + cat /tmp/62.cdj 00:05:20.096 + echo '=== End of file: /tmp/62.cdj ===' 00:05:20.096 + echo '' 00:05:20.096 + echo '=== Start of file: /tmp/spdk_tgt_config.json.C8d ===' 00:05:20.096 + cat /tmp/spdk_tgt_config.json.C8d 00:05:20.096 + echo '=== End of file: /tmp/spdk_tgt_config.json.C8d ===' 00:05:20.096 + echo '' 00:05:20.096 + rm /tmp/62.cdj /tmp/spdk_tgt_config.json.C8d 00:05:20.096 + exit 1 00:05:20.096 23:38:46 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:20.096 INFO: configuration change detected. 
00:05:20.096 23:38:46 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:20.096 23:38:46 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:20.096 23:38:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.096 23:38:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.096 23:38:46 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:20.096 23:38:46 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:20.096 23:38:46 json_config -- json_config/json_config.sh@324 -- # [[ -n 3321905 ]] 00:05:20.096 23:38:46 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:20.096 23:38:46 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:20.096 23:38:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.096 23:38:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.096 23:38:46 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:20.097 23:38:46 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:20.097 23:38:46 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:20.097 23:38:46 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:20.097 23:38:46 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:20.097 23:38:46 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:20.097 23:38:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.097 23:38:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.097 23:38:46 json_config -- json_config/json_config.sh@330 -- # killprocess 3321905 00:05:20.097 23:38:46 json_config -- common/autotest_common.sh@952 -- # '[' -z 3321905 ']' 00:05:20.097 23:38:46 json_config -- common/autotest_common.sh@956 -- # kill -0 3321905 00:05:20.097 23:38:46 json_config -- common/autotest_common.sh@957 -- # uname 00:05:20.097 23:38:46 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:20.097 23:38:46 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3321905 00:05:20.097 23:38:46 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:20.097 23:38:46 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:20.097 23:38:46 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3321905' 00:05:20.097 killing process with pid 3321905 00:05:20.097 23:38:46 json_config -- common/autotest_common.sh@971 -- # kill 3321905 00:05:20.097 23:38:46 json_config -- common/autotest_common.sh@976 -- # wait 3321905 00:05:22.632 23:38:48 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.632 23:38:48 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:22.632 23:38:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.632 23:38:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.632 23:38:48 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:22.632 23:38:48 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:22.632 INFO: Success 00:05:22.632 00:05:22.632 real 0m19.544s 
00:05:22.632 user 0m21.191s 00:05:22.632 sys 0m3.083s 00:05:22.632 23:38:48 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.632 23:38:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.632 ************************************ 00:05:22.632 END TEST json_config 00:05:22.632 ************************************ 00:05:22.632 23:38:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:22.632 23:38:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:22.632 23:38:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.632 23:38:48 -- common/autotest_common.sh@10 -- # set +x 00:05:22.632 ************************************ 00:05:22.632 START TEST json_config_extra_key 00:05:22.632 ************************************ 00:05:22.632 23:38:48 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:22.632 23:38:48 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:22.632 23:38:48 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:22.632 23:38:48 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:22.632 23:38:48 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:22.632 23:38:48 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.632 23:38:48 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:22.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.632 --rc genhtml_branch_coverage=1 00:05:22.632 --rc genhtml_function_coverage=1 00:05:22.632 --rc genhtml_legend=1 00:05:22.632 --rc geninfo_all_blocks=1 00:05:22.632 --rc geninfo_unexecuted_blocks=1 00:05:22.632 00:05:22.632 ' 00:05:22.632 23:38:48 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:22.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.632 --rc genhtml_branch_coverage=1 00:05:22.632 --rc genhtml_function_coverage=1 00:05:22.632 --rc genhtml_legend=1 00:05:22.632 --rc geninfo_all_blocks=1 00:05:22.632 --rc geninfo_unexecuted_blocks=1 00:05:22.632 00:05:22.632 ' 00:05:22.632 23:38:48 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:22.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.632 --rc genhtml_branch_coverage=1 00:05:22.632 --rc genhtml_function_coverage=1 00:05:22.632 --rc genhtml_legend=1 00:05:22.632 --rc geninfo_all_blocks=1 00:05:22.632 --rc geninfo_unexecuted_blocks=1 00:05:22.632 00:05:22.632 ' 00:05:22.632 23:38:48 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:22.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.632 --rc genhtml_branch_coverage=1 00:05:22.632 --rc genhtml_function_coverage=1 00:05:22.632 --rc genhtml_legend=1 00:05:22.632 --rc geninfo_all_blocks=1 00:05:22.632 --rc geninfo_unexecuted_blocks=1 00:05:22.632 00:05:22.632 ' 00:05:22.632 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.632 
23:38:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:22.632 23:38:48 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.632 23:38:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.632 23:38:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.632 23:38:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.633 23:38:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.633 23:38:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:22.633 23:38:48 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.633 23:38:48 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:22.633 23:38:48 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:22.633 23:38:48 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:22.633 23:38:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:22.893 23:38:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.893 23:38:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.893 23:38:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:22.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:22.893 23:38:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:22.893 23:38:48 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:22.893 23:38:48 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:22.893 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:22.893 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:22.893 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:22.893 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:22.893 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:22.893 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:22.893 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:22.893 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:22.893 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:22.893 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:22.893 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:22.893 INFO: launching applications... 
00:05:22.893 23:38:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:22.893 23:38:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:22.893 23:38:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:22.893 23:38:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:22.893 23:38:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:22.893 23:38:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:22.893 23:38:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.893 23:38:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.893 23:38:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3323088 00:05:22.893 23:38:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:22.893 Waiting for target to run... 00:05:22.893 23:38:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:22.893 23:38:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3323088 /var/tmp/spdk_tgt.sock 00:05:22.893 23:38:48 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3323088 ']' 00:05:22.893 23:38:48 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:22.893 23:38:48 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:22.893 23:38:48 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:22.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:22.893 23:38:48 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:22.893 23:38:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:22.893 [2024-11-09 23:38:48.940185] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:05:22.893 [2024-11-09 23:38:48.940328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323088 ] 00:05:23.461 [2024-11-09 23:38:49.405934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.461 [2024-11-09 23:38:49.528227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.398 23:38:50 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.399 23:38:50 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:24.399 23:38:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:24.399 00:05:24.399 23:38:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:24.399 INFO: shutting down applications... 
00:05:24.399 23:38:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:24.399 23:38:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:24.399 23:38:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:24.399 23:38:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3323088 ]] 00:05:24.399 23:38:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3323088 00:05:24.399 23:38:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:24.399 23:38:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.399 23:38:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3323088 00:05:24.399 23:38:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.657 23:38:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.657 23:38:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.657 23:38:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3323088 00:05:24.657 23:38:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:25.226 23:38:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:25.226 23:38:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.226 23:38:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3323088 00:05:25.226 23:38:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:25.793 23:38:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:25.793 23:38:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.793 23:38:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3323088 00:05:25.793 23:38:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.363 23:38:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.363 23:38:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.363 23:38:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3323088 00:05:26.363 23:38:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.622 23:38:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.622 23:38:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.622 23:38:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3323088 00:05:26.622 23:38:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.188 23:38:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.188 23:38:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.188 23:38:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3323088 00:05:27.188 23:38:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:27.188 23:38:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:27.188 23:38:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:27.188 23:38:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:27.188 SPDK target shutdown done 00:05:27.188 23:38:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:27.188 Success 00:05:27.188 00:05:27.188 real 0m4.582s 00:05:27.188 user 0m4.227s 00:05:27.188 sys 0m0.668s 
00:05:27.188 23:38:53 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:27.188 23:38:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.188 ************************************ 00:05:27.188 END TEST json_config_extra_key 00:05:27.188 ************************************ 00:05:27.188 23:38:53 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.188 23:38:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:27.188 23:38:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:27.188 23:38:53 -- common/autotest_common.sh@10 -- # set +x 00:05:27.188 ************************************ 00:05:27.188 START TEST alias_rpc 00:05:27.188 ************************************ 00:05:27.188 23:38:53 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.188 * Looking for test storage... 00:05:27.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:27.188 23:38:53 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:27.188 23:38:53 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:27.188 23:38:53 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:27.447 23:38:53 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.447 23:38:53 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:27.447 23:38:53 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.447 23:38:53 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:27.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.447 --rc genhtml_branch_coverage=1 00:05:27.447 --rc genhtml_function_coverage=1 00:05:27.447 --rc genhtml_legend=1 00:05:27.447 --rc geninfo_all_blocks=1 00:05:27.447 --rc geninfo_unexecuted_blocks=1 00:05:27.447 00:05:27.447 ' 00:05:27.447 23:38:53 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:27.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.447 --rc genhtml_branch_coverage=1 00:05:27.447 --rc genhtml_function_coverage=1 00:05:27.447 --rc genhtml_legend=1 00:05:27.447 --rc geninfo_all_blocks=1 00:05:27.447 --rc geninfo_unexecuted_blocks=1 00:05:27.447 00:05:27.447 ' 00:05:27.447 23:38:53 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:27.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.447 --rc genhtml_branch_coverage=1 00:05:27.447 --rc genhtml_function_coverage=1 00:05:27.447 --rc genhtml_legend=1 00:05:27.447 --rc geninfo_all_blocks=1 00:05:27.447 --rc geninfo_unexecuted_blocks=1 00:05:27.447 00:05:27.447 ' 00:05:27.447 23:38:53 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:27.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.447 --rc genhtml_branch_coverage=1 00:05:27.447 --rc genhtml_function_coverage=1 00:05:27.447 --rc genhtml_legend=1 00:05:27.447 --rc geninfo_all_blocks=1 00:05:27.447 --rc geninfo_unexecuted_blocks=1 00:05:27.447 00:05:27.447 ' 00:05:27.447 23:38:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.447 23:38:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.447 23:38:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3323689 00:05:27.447 23:38:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3323689 00:05:27.447 23:38:53 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 3323689 ']' 00:05:27.447 23:38:53 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.447 23:38:53 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:27.447 23:38:53 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:27.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.447 23:38:53 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:27.447 23:38:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.447 [2024-11-09 23:38:53.557147] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:05:27.447 [2024-11-09 23:38:53.557309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323689 ] 00:05:27.705 [2024-11-09 23:38:53.704540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.705 [2024-11-09 23:38:53.842480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.639 23:38:54 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:28.639 23:38:54 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:28.639 23:38:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:28.896 23:38:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3323689 00:05:28.896 23:38:55 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3323689 ']' 00:05:28.896 23:38:55 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3323689 00:05:28.896 23:38:55 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:28.896 23:38:55 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:28.896 23:38:55 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3323689 00:05:29.155 23:38:55 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.155 23:38:55 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.155 23:38:55 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3323689' 00:05:29.155 killing process with pid 3323689 00:05:29.155 23:38:55 alias_rpc -- common/autotest_common.sh@971 -- # kill 3323689 00:05:29.155 23:38:55 alias_rpc -- common/autotest_common.sh@976 -- # wait 3323689 00:05:31.693 00:05:31.693 real 0m4.266s 00:05:31.693 user 0m4.368s 00:05:31.693 sys 0m0.693s 00:05:31.693 23:38:57 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.693 23:38:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.693 ************************************ 00:05:31.693 END TEST alias_rpc 00:05:31.693 ************************************ 00:05:31.693 23:38:57 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:31.693 23:38:57 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:31.693 23:38:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.693 23:38:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.693 23:38:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.693 ************************************ 00:05:31.693 START TEST spdkcli_tcp 00:05:31.693 ************************************ 00:05:31.693 23:38:57 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:31.693 * Looking for test storage... 
00:05:31.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:31.693 23:38:57 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.693 23:38:57 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:31.693 23:38:57 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.693 23:38:57 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.693 23:38:57 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.693 23:38:57 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.693 23:38:57 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.693 23:38:57 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.693 23:38:57 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.693 23:38:57 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.693 23:38:57 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.693 23:38:57 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.693 23:38:57 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.693 23:38:57 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.693 23:38:57 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.693 23:38:57 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.694 23:38:57 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.694 --rc genhtml_branch_coverage=1 00:05:31.694 --rc genhtml_function_coverage=1 00:05:31.694 --rc genhtml_legend=1 00:05:31.694 --rc geninfo_all_blocks=1 00:05:31.694 --rc geninfo_unexecuted_blocks=1 00:05:31.694 00:05:31.694 ' 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.694 --rc genhtml_branch_coverage=1 00:05:31.694 --rc genhtml_function_coverage=1 00:05:31.694 --rc genhtml_legend=1 00:05:31.694 --rc geninfo_all_blocks=1 00:05:31.694 --rc 
geninfo_unexecuted_blocks=1 00:05:31.694 00:05:31.694 ' 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:31.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.694 --rc genhtml_branch_coverage=1 00:05:31.694 --rc genhtml_function_coverage=1 00:05:31.694 --rc genhtml_legend=1 00:05:31.694 --rc geninfo_all_blocks=1 00:05:31.694 --rc geninfo_unexecuted_blocks=1 00:05:31.694 00:05:31.694 ' 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.694 --rc genhtml_branch_coverage=1 00:05:31.694 --rc genhtml_function_coverage=1 00:05:31.694 --rc genhtml_legend=1 00:05:31.694 --rc geninfo_all_blocks=1 00:05:31.694 --rc geninfo_unexecuted_blocks=1 00:05:31.694 00:05:31.694 ' 00:05:31.694 23:38:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:31.694 23:38:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:31.694 23:38:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:31.694 23:38:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:31.694 23:38:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:31.694 23:38:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:31.694 23:38:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.694 23:38:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3324292 00:05:31.694 23:38:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:31.694 23:38:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3324292 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3324292 ']' 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:31.694 23:38:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.694 [2024-11-09 23:38:57.883702] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:05:31.694 [2024-11-09 23:38:57.883857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3324292 ] 00:05:31.953 [2024-11-09 23:38:58.019738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.953 [2024-11-09 23:38:58.153055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.953 [2024-11-09 23:38:58.153058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.929 23:38:59 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.929 23:38:59 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:32.929 23:38:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3324432 00:05:32.929 23:38:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:32.929 23:38:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:33.198 [ 00:05:33.198 "bdev_malloc_delete", 00:05:33.198 "bdev_malloc_create", 00:05:33.198 "bdev_null_resize", 00:05:33.198 "bdev_null_delete", 00:05:33.198 "bdev_null_create", 00:05:33.198 "bdev_nvme_cuse_unregister", 00:05:33.198 "bdev_nvme_cuse_register", 00:05:33.198 "bdev_opal_new_user", 00:05:33.198 "bdev_opal_set_lock_state", 00:05:33.198 "bdev_opal_delete", 00:05:33.198 "bdev_opal_get_info", 00:05:33.198 "bdev_opal_create", 00:05:33.198 "bdev_nvme_opal_revert", 00:05:33.198 "bdev_nvme_opal_init", 00:05:33.198 "bdev_nvme_send_cmd", 00:05:33.198 "bdev_nvme_set_keys", 00:05:33.198 "bdev_nvme_get_path_iostat", 00:05:33.198 "bdev_nvme_get_mdns_discovery_info", 00:05:33.198 "bdev_nvme_stop_mdns_discovery", 00:05:33.198 "bdev_nvme_start_mdns_discovery", 00:05:33.198 "bdev_nvme_set_multipath_policy", 00:05:33.198 "bdev_nvme_set_preferred_path", 00:05:33.198 "bdev_nvme_get_io_paths", 00:05:33.198 "bdev_nvme_remove_error_injection", 00:05:33.198 "bdev_nvme_add_error_injection", 00:05:33.198 "bdev_nvme_get_discovery_info", 00:05:33.198 "bdev_nvme_stop_discovery", 00:05:33.198 "bdev_nvme_start_discovery", 00:05:33.198 "bdev_nvme_get_controller_health_info", 00:05:33.198 "bdev_nvme_disable_controller", 00:05:33.198 "bdev_nvme_enable_controller", 00:05:33.198 "bdev_nvme_reset_controller", 00:05:33.198 "bdev_nvme_get_transport_statistics", 00:05:33.198 "bdev_nvme_apply_firmware", 00:05:33.198 "bdev_nvme_detach_controller", 00:05:33.198 "bdev_nvme_get_controllers", 00:05:33.198 "bdev_nvme_attach_controller", 00:05:33.198 "bdev_nvme_set_hotplug", 00:05:33.198 "bdev_nvme_set_options", 00:05:33.198 "bdev_passthru_delete", 00:05:33.198 "bdev_passthru_create", 00:05:33.198 "bdev_lvol_set_parent_bdev", 00:05:33.198 "bdev_lvol_set_parent", 00:05:33.198 "bdev_lvol_check_shallow_copy", 00:05:33.198 "bdev_lvol_start_shallow_copy", 00:05:33.198 "bdev_lvol_grow_lvstore", 00:05:33.198 "bdev_lvol_get_lvols", 00:05:33.198 "bdev_lvol_get_lvstores", 00:05:33.198 "bdev_lvol_delete", 00:05:33.198 "bdev_lvol_set_read_only", 00:05:33.198 "bdev_lvol_resize", 00:05:33.198 "bdev_lvol_decouple_parent", 00:05:33.198 "bdev_lvol_inflate", 00:05:33.198 "bdev_lvol_rename", 00:05:33.198 "bdev_lvol_clone_bdev", 00:05:33.198 "bdev_lvol_clone", 00:05:33.198 "bdev_lvol_snapshot", 00:05:33.198 "bdev_lvol_create", 00:05:33.198 "bdev_lvol_delete_lvstore", 00:05:33.198 "bdev_lvol_rename_lvstore", 
00:05:33.198 "bdev_lvol_create_lvstore", 00:05:33.198 "bdev_raid_set_options", 00:05:33.198 "bdev_raid_remove_base_bdev", 00:05:33.198 "bdev_raid_add_base_bdev", 00:05:33.198 "bdev_raid_delete", 00:05:33.198 "bdev_raid_create", 00:05:33.198 "bdev_raid_get_bdevs", 00:05:33.198 "bdev_error_inject_error", 00:05:33.198 "bdev_error_delete", 00:05:33.198 "bdev_error_create", 00:05:33.198 "bdev_split_delete", 00:05:33.198 "bdev_split_create", 00:05:33.198 "bdev_delay_delete", 00:05:33.198 "bdev_delay_create", 00:05:33.198 "bdev_delay_update_latency", 00:05:33.198 "bdev_zone_block_delete", 00:05:33.198 "bdev_zone_block_create", 00:05:33.198 "blobfs_create", 00:05:33.198 "blobfs_detect", 00:05:33.198 "blobfs_set_cache_size", 00:05:33.198 "bdev_aio_delete", 00:05:33.198 "bdev_aio_rescan", 00:05:33.198 "bdev_aio_create", 00:05:33.198 "bdev_ftl_set_property", 00:05:33.198 "bdev_ftl_get_properties", 00:05:33.198 "bdev_ftl_get_stats", 00:05:33.198 "bdev_ftl_unmap", 00:05:33.198 "bdev_ftl_unload", 00:05:33.198 "bdev_ftl_delete", 00:05:33.198 "bdev_ftl_load", 00:05:33.198 "bdev_ftl_create", 00:05:33.198 "bdev_virtio_attach_controller", 00:05:33.198 "bdev_virtio_scsi_get_devices", 00:05:33.198 "bdev_virtio_detach_controller", 00:05:33.198 "bdev_virtio_blk_set_hotplug", 00:05:33.198 "bdev_iscsi_delete", 00:05:33.198 "bdev_iscsi_create", 00:05:33.198 "bdev_iscsi_set_options", 00:05:33.198 "accel_error_inject_error", 00:05:33.198 "ioat_scan_accel_module", 00:05:33.198 "dsa_scan_accel_module", 00:05:33.198 "iaa_scan_accel_module", 00:05:33.198 "keyring_file_remove_key", 00:05:33.198 "keyring_file_add_key", 00:05:33.198 "keyring_linux_set_options", 00:05:33.198 "fsdev_aio_delete", 00:05:33.198 "fsdev_aio_create", 00:05:33.198 "iscsi_get_histogram", 00:05:33.198 "iscsi_enable_histogram", 00:05:33.198 "iscsi_set_options", 00:05:33.198 "iscsi_get_auth_groups", 00:05:33.198 "iscsi_auth_group_remove_secret", 00:05:33.198 "iscsi_auth_group_add_secret", 00:05:33.198 "iscsi_delete_auth_group", 00:05:33.198 "iscsi_create_auth_group", 00:05:33.198 "iscsi_set_discovery_auth", 00:05:33.198 "iscsi_get_options", 00:05:33.198 "iscsi_target_node_request_logout", 00:05:33.198 "iscsi_target_node_set_redirect", 00:05:33.198 "iscsi_target_node_set_auth", 00:05:33.198 "iscsi_target_node_add_lun", 00:05:33.198 "iscsi_get_stats", 00:05:33.198 "iscsi_get_connections", 00:05:33.198 "iscsi_portal_group_set_auth", 00:05:33.198 "iscsi_start_portal_group", 00:05:33.198 "iscsi_delete_portal_group", 00:05:33.198 "iscsi_create_portal_group", 00:05:33.198 "iscsi_get_portal_groups", 00:05:33.198 "iscsi_delete_target_node", 00:05:33.198 "iscsi_target_node_remove_pg_ig_maps", 00:05:33.198 "iscsi_target_node_add_pg_ig_maps", 00:05:33.198 "iscsi_create_target_node", 00:05:33.198 "iscsi_get_target_nodes", 00:05:33.198 "iscsi_delete_initiator_group", 00:05:33.198 "iscsi_initiator_group_remove_initiators", 00:05:33.198 "iscsi_initiator_group_add_initiators", 00:05:33.198 "iscsi_create_initiator_group", 00:05:33.198 "iscsi_get_initiator_groups", 00:05:33.198 "nvmf_set_crdt", 00:05:33.198 "nvmf_set_config", 00:05:33.198 "nvmf_set_max_subsystems", 00:05:33.198 "nvmf_stop_mdns_prr", 00:05:33.198 "nvmf_publish_mdns_prr", 00:05:33.198 "nvmf_subsystem_get_listeners", 00:05:33.198 "nvmf_subsystem_get_qpairs", 00:05:33.198 "nvmf_subsystem_get_controllers", 00:05:33.198 "nvmf_get_stats", 00:05:33.198 "nvmf_get_transports", 00:05:33.198 "nvmf_create_transport", 00:05:33.198 "nvmf_get_targets", 00:05:33.198 "nvmf_delete_target", 00:05:33.198 "nvmf_create_target", 
00:05:33.198 "nvmf_subsystem_allow_any_host", 00:05:33.198 "nvmf_subsystem_set_keys", 00:05:33.198 "nvmf_subsystem_remove_host", 00:05:33.198 "nvmf_subsystem_add_host", 00:05:33.198 "nvmf_ns_remove_host", 00:05:33.198 "nvmf_ns_add_host", 00:05:33.198 "nvmf_subsystem_remove_ns", 00:05:33.198 "nvmf_subsystem_set_ns_ana_group", 00:05:33.198 "nvmf_subsystem_add_ns", 00:05:33.198 "nvmf_subsystem_listener_set_ana_state", 00:05:33.198 "nvmf_discovery_get_referrals", 00:05:33.198 "nvmf_discovery_remove_referral", 00:05:33.198 "nvmf_discovery_add_referral", 00:05:33.198 "nvmf_subsystem_remove_listener", 00:05:33.198 "nvmf_subsystem_add_listener", 00:05:33.198 "nvmf_delete_subsystem", 00:05:33.198 "nvmf_create_subsystem", 00:05:33.198 "nvmf_get_subsystems", 00:05:33.198 "env_dpdk_get_mem_stats", 00:05:33.198 "nbd_get_disks", 00:05:33.198 "nbd_stop_disk", 00:05:33.198 "nbd_start_disk", 00:05:33.198 "ublk_recover_disk", 00:05:33.198 "ublk_get_disks", 00:05:33.198 "ublk_stop_disk", 00:05:33.198 "ublk_start_disk", 00:05:33.198 "ublk_destroy_target", 00:05:33.198 "ublk_create_target", 00:05:33.198 "virtio_blk_create_transport", 00:05:33.198 "virtio_blk_get_transports", 00:05:33.198 "vhost_controller_set_coalescing", 00:05:33.198 "vhost_get_controllers", 00:05:33.198 "vhost_delete_controller", 00:05:33.198 "vhost_create_blk_controller", 00:05:33.198 "vhost_scsi_controller_remove_target", 00:05:33.198 "vhost_scsi_controller_add_target", 00:05:33.198 "vhost_start_scsi_controller", 00:05:33.198 "vhost_create_scsi_controller", 00:05:33.198 "thread_set_cpumask", 00:05:33.198 "scheduler_set_options", 00:05:33.198 "framework_get_governor", 00:05:33.198 "framework_get_scheduler", 00:05:33.198 "framework_set_scheduler", 00:05:33.198 "framework_get_reactors", 00:05:33.198 "thread_get_io_channels", 00:05:33.198 "thread_get_pollers", 00:05:33.198 "thread_get_stats", 00:05:33.198 "framework_monitor_context_switch", 00:05:33.198 "spdk_kill_instance", 00:05:33.198 "log_enable_timestamps", 00:05:33.198 "log_get_flags", 00:05:33.198 "log_clear_flag", 00:05:33.198 "log_set_flag", 00:05:33.198 "log_get_level", 00:05:33.198 "log_set_level", 00:05:33.198 "log_get_print_level", 00:05:33.198 "log_set_print_level", 00:05:33.198 "framework_enable_cpumask_locks", 00:05:33.198 "framework_disable_cpumask_locks", 00:05:33.199 "framework_wait_init", 00:05:33.199 "framework_start_init", 00:05:33.199 "scsi_get_devices", 00:05:33.199 "bdev_get_histogram", 00:05:33.199 "bdev_enable_histogram", 00:05:33.199 "bdev_set_qos_limit", 00:05:33.199 "bdev_set_qd_sampling_period", 00:05:33.199 "bdev_get_bdevs", 00:05:33.199 "bdev_reset_iostat", 00:05:33.199 "bdev_get_iostat", 00:05:33.199 "bdev_examine", 00:05:33.199 "bdev_wait_for_examine", 00:05:33.199 "bdev_set_options", 00:05:33.199 "accel_get_stats", 00:05:33.199 "accel_set_options", 00:05:33.199 "accel_set_driver", 00:05:33.199 "accel_crypto_key_destroy", 00:05:33.199 "accel_crypto_keys_get", 00:05:33.199 "accel_crypto_key_create", 00:05:33.199 "accel_assign_opc", 00:05:33.199 "accel_get_module_info", 00:05:33.199 "accel_get_opc_assignments", 00:05:33.199 "vmd_rescan", 00:05:33.199 "vmd_remove_device", 00:05:33.199 "vmd_enable", 00:05:33.199 "sock_get_default_impl", 00:05:33.199 "sock_set_default_impl", 00:05:33.199 "sock_impl_set_options", 00:05:33.199 "sock_impl_get_options", 00:05:33.199 "iobuf_get_stats", 00:05:33.199 "iobuf_set_options", 00:05:33.199 "keyring_get_keys", 00:05:33.199 "framework_get_pci_devices", 00:05:33.199 "framework_get_config", 00:05:33.199 "framework_get_subsystems", 
00:05:33.199 "fsdev_set_opts", 00:05:33.199 "fsdev_get_opts", 00:05:33.199 "trace_get_info", 00:05:33.199 "trace_get_tpoint_group_mask", 00:05:33.199 "trace_disable_tpoint_group", 00:05:33.199 "trace_enable_tpoint_group", 00:05:33.199 "trace_clear_tpoint_mask", 00:05:33.199 "trace_set_tpoint_mask", 00:05:33.199 "notify_get_notifications", 00:05:33.199 "notify_get_types", 00:05:33.199 "spdk_get_version", 00:05:33.199 "rpc_get_methods" 00:05:33.199 ] 00:05:33.199 23:38:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:33.199 23:38:59 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:33.199 23:38:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.456 23:38:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:33.457 23:38:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3324292 00:05:33.457 23:38:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3324292 ']' 00:05:33.457 23:38:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3324292 00:05:33.457 23:38:59 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:33.457 23:38:59 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:33.457 23:38:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3324292 00:05:33.457 23:38:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:33.457 23:38:59 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:33.457 23:38:59 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3324292' 00:05:33.457 killing process with pid 3324292 00:05:33.457 23:38:59 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3324292 00:05:33.457 23:38:59 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3324292 00:05:35.992 00:05:35.992 real 0m4.195s 00:05:35.992 user 0m7.739s 00:05:35.992 sys 0m0.647s 00:05:35.992 23:39:01 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:35.992 23:39:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.992 ************************************ 00:05:35.992 END TEST spdkcli_tcp 00:05:35.992 ************************************ 00:05:35.992 23:39:01 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:35.992 23:39:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:35.992 23:39:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.992 23:39:01 -- common/autotest_common.sh@10 -- # set +x 00:05:35.992 ************************************ 00:05:35.992 START TEST dpdk_mem_utility 00:05:35.992 ************************************ 00:05:35.992 23:39:01 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:35.992 * Looking for test storage... 
00:05:35.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:35.992 23:39:01 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:35.992 23:39:01 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:35.992 23:39:01 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:35.992 23:39:02 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.992 23:39:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.993 23:39:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.993 23:39:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:35.993 23:39:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.993 23:39:02 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.993 --rc genhtml_branch_coverage=1 00:05:35.993 --rc genhtml_function_coverage=1 00:05:35.993 --rc genhtml_legend=1 00:05:35.993 --rc geninfo_all_blocks=1 00:05:35.993 --rc geninfo_unexecuted_blocks=1 00:05:35.993 00:05:35.993 ' 00:05:35.993 23:39:02 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.993 --rc 
genhtml_branch_coverage=1 00:05:35.993 --rc genhtml_function_coverage=1 00:05:35.993 --rc genhtml_legend=1 00:05:35.993 --rc geninfo_all_blocks=1 00:05:35.993 --rc geninfo_unexecuted_blocks=1 00:05:35.993 00:05:35.993 ' 00:05:35.993 23:39:02 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.993 --rc genhtml_branch_coverage=1 00:05:35.993 --rc genhtml_function_coverage=1 00:05:35.993 --rc genhtml_legend=1 00:05:35.993 --rc geninfo_all_blocks=1 00:05:35.993 --rc geninfo_unexecuted_blocks=1 00:05:35.993 00:05:35.993 ' 00:05:35.993 23:39:02 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.993 --rc genhtml_branch_coverage=1 00:05:35.993 --rc genhtml_function_coverage=1 00:05:35.993 --rc genhtml_legend=1 00:05:35.993 --rc geninfo_all_blocks=1 00:05:35.993 --rc geninfo_unexecuted_blocks=1 00:05:35.993 00:05:35.993 ' 00:05:35.993 23:39:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:35.993 23:39:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3325015 00:05:35.993 23:39:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.993 23:39:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3325015 00:05:35.993 23:39:02 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3325015 ']' 00:05:35.993 23:39:02 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.993 23:39:02 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:35.993 23:39:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.993 23:39:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:35.993 23:39:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.993 [2024-11-09 23:39:02.121260] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
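test_dpdk_mem_info.sh first brings up a bare spdk_tgt and waits for its RPC socket; a minimal hand-run equivalent is sketched below (the polling loop stands in for the waitforlisten helper, and the default /var/tmp/spdk.sock socket is assumed):
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/bin/spdk_tgt &                                    # target whose heap is dumped below
  until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
      sleep 0.5                                             # wait for the RPC socket to accept connections
  done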
00:05:35.993 [2024-11-09 23:39:02.121406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325015 ] 00:05:36.253 [2024-11-09 23:39:02.255081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.253 [2024-11-09 23:39:02.387576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.189 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:37.189 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:37.189 23:39:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:37.189 23:39:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:37.189 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.189 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.189 { 00:05:37.189 "filename": "/tmp/spdk_mem_dump.txt" 00:05:37.189 } 00:05:37.189 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.189 23:39:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:37.450 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:37.450 1 heaps totaling size 816.000000 MiB 00:05:37.450 size: 816.000000 MiB heap id: 0 00:05:37.450 end heaps---------- 00:05:37.450 9 mempools totaling size 595.772034 MiB 00:05:37.450 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:37.450 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:37.450 size: 92.545471 MiB name: bdev_io_3325015 00:05:37.450 size: 50.003479 MiB name: msgpool_3325015 00:05:37.450 size: 36.509338 MiB name: fsdev_io_3325015 00:05:37.450 size: 21.763794 MiB name: PDU_Pool 00:05:37.450 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:37.450 size: 4.133484 MiB name: evtpool_3325015 00:05:37.450 size: 0.026123 MiB name: Session_Pool 00:05:37.450 end mempools------- 00:05:37.450 6 memzones totaling size 4.142822 MiB 00:05:37.450 size: 1.000366 MiB name: RG_ring_0_3325015 00:05:37.450 size: 1.000366 MiB name: RG_ring_1_3325015 00:05:37.450 size: 1.000366 MiB name: RG_ring_4_3325015 00:05:37.450 size: 1.000366 MiB name: RG_ring_5_3325015 00:05:37.450 size: 0.125366 MiB name: RG_ring_2_3325015 00:05:37.450 size: 0.015991 MiB name: RG_ring_3_3325015 00:05:37.450 end memzones------- 00:05:37.450 23:39:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:37.450 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:37.450 list of free elements. 
size: 16.857605 MiB 00:05:37.450 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:37.450 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:37.450 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:37.450 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:37.450 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:37.450 element at address: 0x200019200000 with size: 0.999329 MiB 00:05:37.450 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:37.450 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:37.450 element at address: 0x200018a00000 with size: 0.959900 MiB 00:05:37.450 element at address: 0x200019500040 with size: 0.937256 MiB 00:05:37.450 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:37.450 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:05:37.450 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:37.450 element at address: 0x200018e00000 with size: 0.491150 MiB 00:05:37.450 element at address: 0x200019600000 with size: 0.485657 MiB 00:05:37.450 element at address: 0x200012c00000 with size: 0.446167 MiB 00:05:37.450 element at address: 0x200028000000 with size: 0.411072 MiB 00:05:37.450 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:37.450 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:37.451 list of standard malloc elements. size: 199.221497 MiB 00:05:37.451 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:37.451 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:37.451 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:37.451 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:37.451 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:37.451 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:37.451 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:37.451 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:37.451 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:37.451 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:37.451 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:37.451 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:37.451 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:37.451 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:37.451 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:37.451 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:37.451 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:37.451 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:37.451 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:37.451 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:37.451 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:37.451 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:37.451 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:37.451 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:37.451 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:37.451 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:37.451 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:05:37.451 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:37.451 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:37.451 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:37.451 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:37.451 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:37.451 list of memzone associated elements. size: 599.920898 MiB 00:05:37.451 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:37.451 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:37.451 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:37.451 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:37.451 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:37.451 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3325015_0 00:05:37.451 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:37.451 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3325015_0 00:05:37.451 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:37.451 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3325015_0 00:05:37.451 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:37.451 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:37.451 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:37.451 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:37.451 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:37.451 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3325015_0 00:05:37.451 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:37.451 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3325015 00:05:37.451 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:37.451 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3325015 00:05:37.451 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:37.451 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:37.451 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:37.451 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:37.451 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:37.451 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:37.451 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:37.451 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:37.451 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:37.451 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3325015 00:05:37.451 element at address: 0x2000008ffb80 with 
size: 1.000549 MiB 00:05:37.451 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3325015 00:05:37.451 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:37.451 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3325015 00:05:37.451 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:37.451 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3325015 00:05:37.451 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:37.451 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3325015 00:05:37.451 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:37.451 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3325015 00:05:37.451 element at address: 0x200018e7dbc0 with size: 0.500549 MiB 00:05:37.451 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:37.451 element at address: 0x200012c72380 with size: 0.500549 MiB 00:05:37.451 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:37.451 element at address: 0x20001967c540 with size: 0.250549 MiB 00:05:37.451 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:37.451 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:37.451 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3325015 00:05:37.451 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:37.451 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3325015 00:05:37.451 element at address: 0x200018af5bc0 with size: 0.031799 MiB 00:05:37.451 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:37.451 element at address: 0x2000280693c0 with size: 0.023804 MiB 00:05:37.451 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:37.451 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:37.451 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3325015 00:05:37.451 element at address: 0x20002806f540 with size: 0.002502 MiB 00:05:37.451 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:37.451 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:37.451 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3325015 00:05:37.451 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:37.451 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3325015 00:05:37.451 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:37.451 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3325015 00:05:37.451 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:37.451 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:37.451 23:39:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:37.451 23:39:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3325015 00:05:37.451 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3325015 ']' 00:05:37.451 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3325015 00:05:37.451 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:37.451 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:37.451 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3325015 00:05:37.451 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 
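The heap, mempool and memzone maps above come from scripts/dpdk_mem_info.py reading the dump that env_dpdk_get_mem_stats wrote to /tmp/spdk_mem_dump.txt; against a running spdk_tgt the sequence is roughly:
  ./scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt, as shown above
  ./scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
  ./scripts/dpdk_mem_info.py -m 0           # element-level view of heap id 0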
00:05:37.451 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:37.451 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3325015' 00:05:37.451 killing process with pid 3325015 00:05:37.451 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3325015 00:05:37.451 23:39:03 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3325015 00:05:39.990 00:05:39.990 real 0m4.045s 00:05:39.990 user 0m4.053s 00:05:39.990 sys 0m0.666s 00:05:39.990 23:39:05 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.990 23:39:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.990 ************************************ 00:05:39.990 END TEST dpdk_mem_utility 00:05:39.990 ************************************ 00:05:39.990 23:39:05 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:39.990 23:39:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.990 23:39:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.990 23:39:05 -- common/autotest_common.sh@10 -- # set +x 00:05:39.990 ************************************ 00:05:39.990 START TEST event 00:05:39.990 ************************************ 00:05:39.990 23:39:05 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:39.990 * Looking for test storage... 00:05:39.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:39.990 23:39:06 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:39.990 23:39:06 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:39.990 23:39:06 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:39.990 23:39:06 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:39.990 23:39:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.990 23:39:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.990 23:39:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.990 23:39:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.990 23:39:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.990 23:39:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.990 23:39:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.990 23:39:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.990 23:39:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.990 23:39:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.990 23:39:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.990 23:39:06 event -- scripts/common.sh@344 -- # case "$op" in 00:05:39.990 23:39:06 event -- scripts/common.sh@345 -- # : 1 00:05:39.990 23:39:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.990 23:39:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.990 23:39:06 event -- scripts/common.sh@365 -- # decimal 1 00:05:39.990 23:39:06 event -- scripts/common.sh@353 -- # local d=1 00:05:39.990 23:39:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.990 23:39:06 event -- scripts/common.sh@355 -- # echo 1 00:05:39.990 23:39:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.990 23:39:06 event -- scripts/common.sh@366 -- # decimal 2 00:05:39.990 23:39:06 event -- scripts/common.sh@353 -- # local d=2 00:05:39.990 23:39:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.990 23:39:06 event -- scripts/common.sh@355 -- # echo 2 00:05:39.990 23:39:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.990 23:39:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.990 23:39:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.990 23:39:06 event -- scripts/common.sh@368 -- # return 0 00:05:39.990 23:39:06 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.990 23:39:06 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:39.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.990 --rc genhtml_branch_coverage=1 00:05:39.990 --rc genhtml_function_coverage=1 00:05:39.990 --rc genhtml_legend=1 00:05:39.990 --rc geninfo_all_blocks=1 00:05:39.990 --rc geninfo_unexecuted_blocks=1 00:05:39.990 00:05:39.990 ' 00:05:39.990 23:39:06 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:39.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.990 --rc genhtml_branch_coverage=1 00:05:39.990 --rc genhtml_function_coverage=1 00:05:39.990 --rc genhtml_legend=1 00:05:39.990 --rc geninfo_all_blocks=1 00:05:39.990 --rc geninfo_unexecuted_blocks=1 00:05:39.990 00:05:39.990 ' 00:05:39.990 23:39:06 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:39.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.990 --rc genhtml_branch_coverage=1 00:05:39.990 --rc genhtml_function_coverage=1 00:05:39.990 --rc genhtml_legend=1 00:05:39.990 --rc geninfo_all_blocks=1 00:05:39.990 --rc geninfo_unexecuted_blocks=1 00:05:39.990 00:05:39.990 ' 00:05:39.990 23:39:06 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:39.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.990 --rc genhtml_branch_coverage=1 00:05:39.990 --rc genhtml_function_coverage=1 00:05:39.990 --rc genhtml_legend=1 00:05:39.990 --rc geninfo_all_blocks=1 00:05:39.990 --rc geninfo_unexecuted_blocks=1 00:05:39.990 00:05:39.990 ' 00:05:39.990 23:39:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:39.990 23:39:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:39.990 23:39:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.990 23:39:06 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:39.990 23:39:06 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.990 23:39:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.990 ************************************ 00:05:39.990 START TEST event_perf 00:05:39.990 ************************************ 00:05:39.990 23:39:06 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:39.990 Running I/O for 1 seconds...[2024-11-09 23:39:06.182412] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:05:39.990 [2024-11-09 23:39:06.182549] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325488 ] 00:05:40.248 [2024-11-09 23:39:06.340430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.508 [2024-11-09 23:39:06.487035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.508 [2024-11-09 23:39:06.487114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.508 [2024-11-09 23:39:06.487220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.508 [2024-11-09 23:39:06.487230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.885 Running I/O for 1 seconds... 00:05:41.885 lcore 0: 221193 00:05:41.885 lcore 1: 221192 00:05:41.885 lcore 2: 221191 00:05:41.885 lcore 3: 221192 00:05:41.885 done. 00:05:41.885 00:05:41.885 real 0m1.607s 00:05:41.885 user 0m4.417s 00:05:41.885 sys 0m0.174s 00:05:41.885 23:39:07 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:41.885 23:39:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.885 ************************************ 00:05:41.885 END TEST event_perf 00:05:41.885 ************************************ 00:05:41.885 23:39:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:41.886 23:39:07 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:41.886 23:39:07 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:41.886 23:39:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.886 ************************************ 00:05:41.886 START TEST event_reactor 00:05:41.886 ************************************ 00:05:41.886 23:39:07 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:41.886 [2024-11-09 23:39:07.832431] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
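The per-lcore event counts above come from the standalone event_perf app, which takes a core mask and a run time; invoked by hand it would look like this (workspace path as above):
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/event/event_perf/event_perf -m 0xF -t 1   # four reactors, one second, prints events per lcore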
00:05:41.886 [2024-11-09 23:39:07.832550] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326024 ] 00:05:41.886 [2024-11-09 23:39:07.975111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.145 [2024-11-09 23:39:08.112355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.522 test_start 00:05:43.522 oneshot 00:05:43.522 tick 100 00:05:43.522 tick 100 00:05:43.522 tick 250 00:05:43.522 tick 100 00:05:43.522 tick 100 00:05:43.522 tick 250 00:05:43.522 tick 100 00:05:43.522 tick 500 00:05:43.522 tick 100 00:05:43.522 tick 100 00:05:43.522 tick 250 00:05:43.522 tick 100 00:05:43.522 tick 100 00:05:43.522 test_end 00:05:43.522 00:05:43.522 real 0m1.566s 00:05:43.522 user 0m1.421s 00:05:43.522 sys 0m0.136s 00:05:43.522 23:39:09 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.522 23:39:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:43.522 ************************************ 00:05:43.522 END TEST event_reactor 00:05:43.522 ************************************ 00:05:43.522 23:39:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.522 23:39:09 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:43.522 23:39:09 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.522 23:39:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.522 ************************************ 00:05:43.522 START TEST event_reactor_perf 00:05:43.522 ************************************ 00:05:43.522 23:39:09 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.522 [2024-11-09 23:39:09.444271] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:05:43.522 [2024-11-09 23:39:09.444378] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326434 ] 00:05:43.522 [2024-11-09 23:39:09.584789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.787 [2024-11-09 23:39:09.725269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.167 test_start 00:05:45.167 test_end 00:05:45.167 Performance: 266103 events per second 00:05:45.167 00:05:45.167 real 0m1.570s 00:05:45.167 user 0m1.414s 00:05:45.167 sys 0m0.146s 00:05:45.167 23:39:10 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.167 23:39:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.167 ************************************ 00:05:45.167 END TEST event_reactor_perf 00:05:45.167 ************************************ 00:05:45.167 23:39:10 event -- event/event.sh@49 -- # uname -s 00:05:45.167 23:39:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:45.167 23:39:11 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:45.167 23:39:11 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.167 23:39:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.167 23:39:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.167 ************************************ 00:05:45.167 START TEST event_scheduler 00:05:45.167 ************************************ 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:45.168 * Looking for test storage... 
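The oneshot/tick trace and the events-per-second figure above come from the reactor and reactor_perf apps; both take the same -t duration flag as event_perf, e.g.:
  ./test/event/reactor/reactor -t 1             # prints the oneshot/tick schedule seen above
  ./test/event/reactor_perf/reactor_perf -t 1   # prints 'Performance: N events per second'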
00:05:45.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.168 23:39:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:45.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.168 --rc genhtml_branch_coverage=1 00:05:45.168 --rc genhtml_function_coverage=1 00:05:45.168 --rc genhtml_legend=1 00:05:45.168 --rc geninfo_all_blocks=1 00:05:45.168 --rc geninfo_unexecuted_blocks=1 00:05:45.168 00:05:45.168 ' 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:45.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.168 --rc genhtml_branch_coverage=1 00:05:45.168 --rc genhtml_function_coverage=1 00:05:45.168 --rc genhtml_legend=1 00:05:45.168 --rc geninfo_all_blocks=1 00:05:45.168 --rc geninfo_unexecuted_blocks=1 00:05:45.168 00:05:45.168 ' 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:45.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.168 --rc genhtml_branch_coverage=1 00:05:45.168 --rc genhtml_function_coverage=1 00:05:45.168 --rc genhtml_legend=1 00:05:45.168 --rc geninfo_all_blocks=1 00:05:45.168 --rc geninfo_unexecuted_blocks=1 00:05:45.168 00:05:45.168 ' 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:45.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.168 --rc genhtml_branch_coverage=1 00:05:45.168 --rc genhtml_function_coverage=1 00:05:45.168 --rc genhtml_legend=1 00:05:45.168 --rc geninfo_all_blocks=1 00:05:45.168 --rc geninfo_unexecuted_blocks=1 00:05:45.168 00:05:45.168 ' 00:05:45.168 23:39:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:45.168 23:39:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3326757 00:05:45.168 23:39:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:45.168 23:39:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.168 23:39:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3326757 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3326757 ']' 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:45.168 23:39:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.168 [2024-11-09 23:39:11.244232] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:05:45.168 [2024-11-09 23:39:11.244377] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326757 ] 00:05:45.428 [2024-11-09 23:39:11.377300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.429 [2024-11-09 23:39:11.497607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.429 [2024-11-09 23:39:11.497660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.429 [2024-11-09 23:39:11.497701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.429 [2024-11-09 23:39:11.497712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.365 23:39:12 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.365 23:39:12 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:46.365 23:39:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:46.365 23:39:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.365 23:39:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.365 [2024-11-09 23:39:12.216837] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:46.365 [2024-11-09 23:39:12.216902] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:46.365 [2024-11-09 23:39:12.216935] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:46.365 [2024-11-09 23:39:12.216954] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:46.365 [2024-11-09 23:39:12.216974] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:46.365 23:39:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.366 23:39:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:46.366 23:39:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.366 23:39:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.366 [2024-11-09 23:39:12.523122] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
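scheduler.sh launches its test app with --wait-for-rpc so a scheduler can be selected before the framework initializes; a minimal sketch of that handshake (rpc_cmd is assumed to wrap scripts/rpc.py):
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  ./scripts/rpc.py framework_set_scheduler dynamic   # falls back from the dpdk governor, as logged above
  ./scripts/rpc.py framework_start_init              # prints 'Scheduler test application started.'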
00:05:46.366 23:39:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.366 23:39:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:46.366 23:39:12 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.366 23:39:12 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.366 23:39:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.366 ************************************ 00:05:46.366 START TEST scheduler_create_thread 00:05:46.366 ************************************ 00:05:46.366 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:46.366 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:46.366 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.366 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.625 2 00:05:46.625 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.625 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:46.625 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 3 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 4 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 5 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 6 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 7 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 8 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 9 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 10 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.626 23:39:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.195 23:39:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.195 00:05:47.195 real 0m0.599s 00:05:47.195 user 0m0.010s 00:05:47.195 sys 0m0.005s 00:05:47.195 23:39:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.195 23:39:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.195 ************************************ 00:05:47.195 END TEST scheduler_create_thread 00:05:47.195 ************************************ 00:05:47.195 23:39:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:47.195 23:39:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3326757 00:05:47.195 23:39:13 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3326757 ']' 00:05:47.195 23:39:13 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 3326757 00:05:47.195 23:39:13 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:47.195 23:39:13 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:47.195 23:39:13 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3326757 00:05:47.195 23:39:13 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:47.195 23:39:13 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:47.195 23:39:13 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3326757' 00:05:47.195 killing process with pid 3326757 00:05:47.195 23:39:13 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3326757 00:05:47.195 23:39:13 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3326757 00:05:47.455 [2024-11-09 23:39:13.632124] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
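scheduler_create_thread above exercises the target through an out-of-tree RPC plugin; the calls it issues boil down to the following (locating scheduler_plugin on PYTHONPATH, e.g. under test/event/scheduler, is an assumption):
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id 11 from the run above
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12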
00:05:48.835 00:05:48.836 real 0m3.620s 00:05:48.836 user 0m7.458s 00:05:48.836 sys 0m0.502s 00:05:48.836 23:39:14 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.836 23:39:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.836 ************************************ 00:05:48.836 END TEST event_scheduler 00:05:48.836 ************************************ 00:05:48.836 23:39:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:48.836 23:39:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:48.836 23:39:14 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.836 23:39:14 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.836 23:39:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.836 ************************************ 00:05:48.836 START TEST app_repeat 00:05:48.836 ************************************ 00:05:48.836 23:39:14 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3327207 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3327207' 00:05:48.836 Process app_repeat pid: 3327207 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:48.836 spdk_app_start Round 0 00:05:48.836 23:39:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3327207 /var/tmp/spdk-nbd.sock 00:05:48.836 23:39:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3327207 ']' 00:05:48.836 23:39:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.836 23:39:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:48.836 23:39:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.836 23:39:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:48.836 23:39:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.836 [2024-11-09 23:39:14.752830] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
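app_repeat serves RPCs on its own socket, /var/tmp/spdk-nbd.sock; the nbd_common.sh helpers that follow create two malloc bdevs and export them as /dev/nbd0 and /dev/nbd1 with roughly these calls (sketch only, helper internals omitted):
  rpc='./scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $rpc bdev_malloc_create 64 4096          # Malloc0: 64 MiB, 4 KiB blocks
  $rpc bdev_malloc_create 64 4096          # Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1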
00:05:48.836 [2024-11-09 23:39:14.752989] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327207 ] 00:05:48.836 [2024-11-09 23:39:14.889299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.836 [2024-11-09 23:39:15.017944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.836 [2024-11-09 23:39:15.017951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.770 23:39:15 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:49.770 23:39:15 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:49.770 23:39:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.027 Malloc0 00:05:50.028 23:39:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.286 Malloc1 00:05:50.286 23:39:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.286 23:39:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.855 /dev/nbd0 00:05:50.855 23:39:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.855 23:39:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.855 1+0 records in 00:05:50.855 1+0 records out 00:05:50.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191667 s, 21.4 MB/s 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:50.855 23:39:16 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:50.855 23:39:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.855 23:39:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.855 23:39:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.114 /dev/nbd1 00:05:51.114 23:39:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.114 23:39:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.114 1+0 records in 00:05:51.114 1+0 records out 00:05:51.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025523 s, 16.0 MB/s 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:51.114 23:39:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:51.114 23:39:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.114 23:39:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.114 23:39:17 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.114 23:39:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.114 23:39:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.372 { 00:05:51.372 "nbd_device": "/dev/nbd0", 00:05:51.372 "bdev_name": "Malloc0" 00:05:51.372 }, 00:05:51.372 { 00:05:51.372 "nbd_device": "/dev/nbd1", 00:05:51.372 "bdev_name": "Malloc1" 00:05:51.372 } 00:05:51.372 ]' 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.372 { 00:05:51.372 "nbd_device": "/dev/nbd0", 00:05:51.372 "bdev_name": "Malloc0" 00:05:51.372 }, 00:05:51.372 { 00:05:51.372 "nbd_device": "/dev/nbd1", 00:05:51.372 "bdev_name": "Malloc1" 00:05:51.372 } 00:05:51.372 ]' 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.372 /dev/nbd1' 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.372 /dev/nbd1' 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.372 256+0 records in 00:05:51.372 256+0 records out 00:05:51.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380222 s, 276 MB/s 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.372 256+0 records in 00:05:51.372 256+0 records out 00:05:51.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243893 s, 43.0 MB/s 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.372 256+0 records in 00:05:51.372 256+0 records out 00:05:51.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282631 s, 37.1 MB/s 00:05:51.372 23:39:17 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.372 23:39:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.631 23:39:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.631 23:39:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.631 23:39:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.631 23:39:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.631 23:39:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.631 23:39:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.631 23:39:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.631 23:39:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.631 23:39:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.631 23:39:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.199 23:39:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.199 23:39:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.199 23:39:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.199 23:39:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.199 23:39:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:52.199 23:39:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.200 23:39:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.200 23:39:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.200 23:39:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.200 23:39:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.200 23:39:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.200 23:39:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.200 23:39:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.200 23:39:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.458 23:39:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.458 23:39:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.458 23:39:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.458 23:39:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.458 23:39:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.458 23:39:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.458 23:39:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.458 23:39:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.458 23:39:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.458 23:39:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.716 23:39:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.095 [2024-11-09 23:39:20.071327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.095 [2024-11-09 23:39:20.208314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.095 [2024-11-09 23:39:20.208317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.360 [2024-11-09 23:39:20.427584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.360 [2024-11-09 23:39:20.427705] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.740 23:39:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:55.740 23:39:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:55.740 spdk_app_start Round 1 00:05:55.740 23:39:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3327207 /var/tmp/spdk-nbd.sock 00:05:55.740 23:39:21 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3327207 ']' 00:05:55.740 23:39:21 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.740 23:39:21 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:55.740 23:39:21 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:55.740 23:39:21 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:55.740 23:39:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.998 23:39:22 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:55.998 23:39:22 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:55.998 23:39:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.568 Malloc0 00:05:56.568 23:39:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.827 Malloc1 00:05:56.827 23:39:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.827 23:39:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.086 /dev/nbd0 00:05:57.086 23:39:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.086 23:39:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:57.086 1+0 records in 00:05:57.086 1+0 records out 00:05:57.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188817 s, 21.7 MB/s 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:57.086 23:39:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:57.086 23:39:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.086 23:39:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.086 23:39:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.345 /dev/nbd1 00:05:57.345 23:39:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.345 23:39:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.345 1+0 records in 00:05:57.345 1+0 records out 00:05:57.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245098 s, 16.7 MB/s 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:57.345 23:39:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:57.345 23:39:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.345 23:39:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.345 23:39:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.345 23:39:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.345 23:39:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.604 23:39:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:57.604 { 00:05:57.604 "nbd_device": "/dev/nbd0", 00:05:57.604 "bdev_name": "Malloc0" 00:05:57.604 }, 00:05:57.604 { 00:05:57.604 "nbd_device": "/dev/nbd1", 00:05:57.604 "bdev_name": "Malloc1" 00:05:57.604 } 00:05:57.604 ]' 00:05:57.604 23:39:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.604 { 00:05:57.604 "nbd_device": "/dev/nbd0", 00:05:57.604 "bdev_name": "Malloc0" 00:05:57.604 }, 00:05:57.604 { 00:05:57.604 "nbd_device": "/dev/nbd1", 00:05:57.604 "bdev_name": "Malloc1" 00:05:57.604 } 00:05:57.604 ]' 00:05:57.604 23:39:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.862 /dev/nbd1' 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.862 /dev/nbd1' 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.862 256+0 records in 00:05:57.862 256+0 records out 00:05:57.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515459 s, 203 MB/s 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.862 256+0 records in 00:05:57.862 256+0 records out 00:05:57.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024433 s, 42.9 MB/s 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.862 256+0 records in 00:05:57.862 256+0 records out 00:05:57.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292488 s, 35.9 MB/s 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.862 23:39:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.863 23:39:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.863 23:39:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:57.863 23:39:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.863 23:39:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.121 23:39:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.121 23:39:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.121 23:39:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.121 23:39:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.121 23:39:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.121 23:39:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.121 23:39:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.121 23:39:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.121 23:39:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.121 23:39:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.379 23:39:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.379 23:39:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.379 23:39:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.379 23:39:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.379 23:39:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.379 23:39:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.379 23:39:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.379 23:39:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.379 23:39:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.379 23:39:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.379 23:39:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.637 23:39:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.637 23:39:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.637 23:39:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.637 23:39:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.637 23:39:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.637 23:39:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.637 23:39:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:58.637 23:39:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.637 23:39:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.897 23:39:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.897 23:39:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.897 23:39:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.897 23:39:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.156 23:39:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.538 [2024-11-09 23:39:26.478381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.538 [2024-11-09 23:39:26.606111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.538 [2024-11-09 23:39:26.606112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.797 [2024-11-09 23:39:26.817556] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.797 [2024-11-09 23:39:26.817658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.177 23:39:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.177 23:39:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:02.177 spdk_app_start Round 2 00:06:02.177 23:39:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3327207 /var/tmp/spdk-nbd.sock 00:06:02.177 23:39:28 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3327207 ']' 00:06:02.177 23:39:28 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.177 23:39:28 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:02.177 23:39:28 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:02.177 23:39:28 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:02.177 23:39:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.435 23:39:28 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:02.435 23:39:28 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:02.435 23:39:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.693 Malloc0 00:06:02.693 23:39:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.262 Malloc1 00:06:03.262 23:39:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.262 23:39:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.521 /dev/nbd0 00:06:03.521 23:39:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.521 23:39:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:03.521 1+0 records in 00:06:03.521 1+0 records out 00:06:03.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279125 s, 14.7 MB/s 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:03.521 23:39:29 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:03.521 23:39:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.521 23:39:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.521 23:39:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.779 /dev/nbd1 00:06:03.779 23:39:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.779 23:39:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.779 1+0 records in 00:06:03.779 1+0 records out 00:06:03.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185051 s, 22.1 MB/s 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:03.779 23:39:29 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:03.779 23:39:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.779 23:39:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.779 23:39:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.779 23:39:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.779 23:39:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.036 23:39:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:04.036 { 00:06:04.036 "nbd_device": "/dev/nbd0", 00:06:04.036 "bdev_name": "Malloc0" 00:06:04.036 }, 00:06:04.036 { 00:06:04.036 "nbd_device": "/dev/nbd1", 00:06:04.036 "bdev_name": "Malloc1" 00:06:04.037 } 00:06:04.037 ]' 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.037 { 00:06:04.037 "nbd_device": "/dev/nbd0", 00:06:04.037 "bdev_name": "Malloc0" 00:06:04.037 }, 00:06:04.037 { 00:06:04.037 "nbd_device": "/dev/nbd1", 00:06:04.037 "bdev_name": "Malloc1" 00:06:04.037 } 00:06:04.037 ]' 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.037 /dev/nbd1' 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.037 /dev/nbd1' 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.037 256+0 records in 00:06:04.037 256+0 records out 00:06:04.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515127 s, 204 MB/s 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.037 256+0 records in 00:06:04.037 256+0 records out 00:06:04.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263211 s, 39.8 MB/s 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.037 23:39:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.295 256+0 records in 00:06:04.295 256+0 records out 00:06:04.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031476 s, 33.3 MB/s 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.295 23:39:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.553 23:39:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.553 23:39:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.553 23:39:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.553 23:39:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.553 23:39:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.553 23:39:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.553 23:39:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.553 23:39:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.553 23:39:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.553 23:39:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.811 23:39:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.811 23:39:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.811 23:39:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.811 23:39:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.811 23:39:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.811 23:39:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.811 23:39:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.811 23:39:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.811 23:39:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.811 23:39:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.811 23:39:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.070 23:39:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.070 23:39:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.070 23:39:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.070 23:39:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.070 23:39:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.070 23:39:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.070 23:39:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.070 23:39:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.070 23:39:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.070 23:39:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.070 23:39:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.070 23:39:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.070 23:39:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.638 23:39:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.638 [2024-11-09 23:39:32.836008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.897 [2024-11-09 23:39:32.970965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.897 [2024-11-09 23:39:32.970972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.155 [2024-11-09 23:39:33.186820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.155 [2024-11-09 23:39:33.186907] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.529 23:39:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3327207 /var/tmp/spdk-nbd.sock 00:06:08.529 23:39:34 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3327207 ']' 00:06:08.529 23:39:34 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.529 23:39:34 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.529 23:39:34 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:08.529 23:39:34 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.529 23:39:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.787 23:39:34 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.787 23:39:34 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:08.787 23:39:34 event.app_repeat -- event/event.sh@39 -- # killprocess 3327207 00:06:08.787 23:39:34 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3327207 ']' 00:06:08.787 23:39:34 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3327207 00:06:08.787 23:39:34 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:08.787 23:39:34 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:08.787 23:39:34 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3327207 00:06:08.787 23:39:34 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:08.787 23:39:34 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:08.787 23:39:34 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3327207' 00:06:08.787 killing process with pid 3327207 00:06:08.787 23:39:34 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3327207 00:06:08.787 23:39:34 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3327207 00:06:10.163 spdk_app_start is called in Round 0. 00:06:10.163 Shutdown signal received, stop current app iteration 00:06:10.163 Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 reinitialization... 00:06:10.163 spdk_app_start is called in Round 1. 00:06:10.163 Shutdown signal received, stop current app iteration 00:06:10.163 Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 reinitialization... 00:06:10.163 spdk_app_start is called in Round 2. 00:06:10.163 Shutdown signal received, stop current app iteration 00:06:10.163 Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 reinitialization... 00:06:10.163 spdk_app_start is called in Round 3. 
00:06:10.163 Shutdown signal received, stop current app iteration 00:06:10.163 23:39:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:10.163 23:39:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:10.163 00:06:10.163 real 0m21.276s 00:06:10.163 user 0m45.344s 00:06:10.163 sys 0m3.388s 00:06:10.163 23:39:35 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.163 23:39:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.163 ************************************ 00:06:10.163 END TEST app_repeat 00:06:10.163 ************************************ 00:06:10.163 23:39:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:10.163 23:39:36 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:10.163 23:39:36 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:10.163 23:39:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.163 23:39:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.163 ************************************ 00:06:10.163 START TEST cpu_locks 00:06:10.163 ************************************ 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:10.163 * Looking for test storage... 00:06:10.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.163 23:39:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:10.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.163 --rc genhtml_branch_coverage=1 00:06:10.163 --rc genhtml_function_coverage=1 00:06:10.163 --rc genhtml_legend=1 00:06:10.163 --rc geninfo_all_blocks=1 00:06:10.163 --rc geninfo_unexecuted_blocks=1 00:06:10.163 00:06:10.163 ' 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:10.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.163 --rc genhtml_branch_coverage=1 00:06:10.163 --rc genhtml_function_coverage=1 00:06:10.163 --rc genhtml_legend=1 00:06:10.163 --rc geninfo_all_blocks=1 00:06:10.163 --rc geninfo_unexecuted_blocks=1 00:06:10.163 00:06:10.163 ' 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:10.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.163 --rc genhtml_branch_coverage=1 00:06:10.163 --rc genhtml_function_coverage=1 00:06:10.163 --rc genhtml_legend=1 00:06:10.163 --rc geninfo_all_blocks=1 00:06:10.163 --rc geninfo_unexecuted_blocks=1 00:06:10.163 00:06:10.163 ' 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:10.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.163 --rc genhtml_branch_coverage=1 00:06:10.163 --rc genhtml_function_coverage=1 00:06:10.163 --rc genhtml_legend=1 00:06:10.163 --rc geninfo_all_blocks=1 00:06:10.163 --rc geninfo_unexecuted_blocks=1 00:06:10.163 00:06:10.163 ' 00:06:10.163 23:39:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:10.163 23:39:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:10.163 23:39:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:10.163 23:39:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.163 23:39:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.163 ************************************ 
00:06:10.163 START TEST default_locks 00:06:10.163 ************************************ 00:06:10.163 23:39:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:10.163 23:39:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3329973 00:06:10.163 23:39:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.163 23:39:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3329973 00:06:10.163 23:39:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3329973 ']' 00:06:10.163 23:39:36 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.163 23:39:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:10.163 23:39:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.163 23:39:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:10.163 23:39:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.163 [2024-11-09 23:39:36.302912] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:06:10.164 [2024-11-09 23:39:36.303062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329973 ] 00:06:10.422 [2024-11-09 23:39:36.445416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.422 [2024-11-09 23:39:36.583083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.358 23:39:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:11.358 23:39:37 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:11.358 23:39:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3329973 00:06:11.358 23:39:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3329973 00:06:11.358 23:39:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.924 lslocks: write error 00:06:11.924 23:39:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3329973 00:06:11.924 23:39:37 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3329973 ']' 00:06:11.924 23:39:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3329973 00:06:11.924 23:39:37 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:11.924 23:39:37 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:11.924 23:39:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3329973 00:06:11.924 23:39:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:11.924 23:39:37 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:11.924 23:39:37 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 3329973' 00:06:11.924 killing process with pid 3329973 00:06:11.924 23:39:37 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3329973 00:06:11.924 23:39:37 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3329973 00:06:14.453 23:39:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3329973 00:06:14.453 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:14.453 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3329973 00:06:14.453 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:14.453 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.453 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:14.453 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.453 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3329973 00:06:14.453 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3329973 ']' 00:06:14.453 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.454 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3329973) - No such process 00:06:14.454 ERROR: process (pid: 3329973) is no longer running 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.454 00:06:14.454 real 0m4.204s 00:06:14.454 user 0m4.203s 00:06:14.454 sys 0m0.742s 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:14.454 23:39:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.454 ************************************ 00:06:14.454 END TEST default_locks 00:06:14.454 ************************************ 00:06:14.454 23:39:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:14.454 23:39:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:14.454 23:39:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.454 23:39:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.454 ************************************ 00:06:14.454 START TEST default_locks_via_rpc 00:06:14.454 ************************************ 00:06:14.454 23:39:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:14.454 23:39:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3330528 00:06:14.454 23:39:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.454 23:39:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3330528 00:06:14.454 23:39:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3330528 ']' 00:06:14.454 23:39:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.454 23:39:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:14.454 23:39:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.454 23:39:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:14.454 23:39:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.454 [2024-11-09 23:39:40.563507] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:06:14.454 [2024-11-09 23:39:40.563686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330528 ] 00:06:14.712 [2024-11-09 23:39:40.704545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.712 [2024-11-09 23:39:40.842285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3330528 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3330528 00:06:15.647 23:39:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.905 23:39:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3330528 00:06:15.905 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3330528 ']' 00:06:15.905 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3330528 00:06:15.905 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:15.905 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:15.905 23:39:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3330528 00:06:15.905 23:39:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:15.905 
23:39:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:15.905 23:39:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3330528' 00:06:15.905 killing process with pid 3330528 00:06:15.905 23:39:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3330528 00:06:15.905 23:39:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3330528 00:06:18.435 00:06:18.435 real 0m3.972s 00:06:18.435 user 0m3.963s 00:06:18.435 sys 0m0.709s 00:06:18.435 23:39:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.435 23:39:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.435 ************************************ 00:06:18.435 END TEST default_locks_via_rpc 00:06:18.435 ************************************ 00:06:18.435 23:39:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:18.435 23:39:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:18.435 23:39:44 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.435 23:39:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.435 ************************************ 00:06:18.435 START TEST non_locking_app_on_locked_coremask 00:06:18.435 ************************************ 00:06:18.435 23:39:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:18.435 23:39:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3330968 00:06:18.435 23:39:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.435 23:39:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3330968 /var/tmp/spdk.sock 00:06:18.435 23:39:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3330968 ']' 00:06:18.435 23:39:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.435 23:39:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:18.435 23:39:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.435 23:39:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:18.435 23:39:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.435 [2024-11-09 23:39:44.585671] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:06:18.435 [2024-11-09 23:39:44.585817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330968 ] 00:06:18.693 [2024-11-09 23:39:44.727006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.694 [2024-11-09 23:39:44.859652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.628 23:39:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.628 23:39:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:19.628 23:39:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3331192 00:06:19.628 23:39:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:19.628 23:39:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3331192 /var/tmp/spdk2.sock 00:06:19.628 23:39:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3331192 ']' 00:06:19.628 23:39:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.628 23:39:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.628 23:39:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.628 23:39:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.628 23:39:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.887 [2024-11-09 23:39:45.874769] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:06:19.887 [2024-11-09 23:39:45.874920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331192 ] 00:06:19.887 [2024-11-09 23:39:46.079548] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:19.887 [2024-11-09 23:39:46.079632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.453 [2024-11-09 23:39:46.363976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.354 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:22.354 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:22.354 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3330968 00:06:22.354 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3330968 00:06:22.354 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.919 lslocks: write error 00:06:22.919 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3330968 00:06:22.919 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3330968 ']' 00:06:22.919 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3330968 00:06:22.919 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:22.919 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:22.919 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3330968 00:06:22.919 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:22.919 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:22.919 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3330968' 00:06:22.919 killing process with pid 3330968 00:06:22.919 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3330968 00:06:22.919 23:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3330968 00:06:28.184 23:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3331192 00:06:28.184 23:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3331192 ']' 00:06:28.184 23:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3331192 00:06:28.184 23:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:28.184 23:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:28.184 23:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3331192 00:06:28.184 23:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:28.184 23:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:28.184 23:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3331192' 00:06:28.184 
killing process with pid 3331192 00:06:28.184 23:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3331192 00:06:28.184 23:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3331192 00:06:30.082 00:06:30.082 real 0m11.757s 00:06:30.082 user 0m12.105s 00:06:30.082 sys 0m1.429s 00:06:30.082 23:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.082 23:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.082 ************************************ 00:06:30.082 END TEST non_locking_app_on_locked_coremask 00:06:30.082 ************************************ 00:06:30.082 23:39:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:30.083 23:39:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:30.083 23:39:56 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.083 23:39:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.341 ************************************ 00:06:30.341 START TEST locking_app_on_unlocked_coremask 00:06:30.341 ************************************ 00:06:30.341 23:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:30.341 23:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3332457 00:06:30.341 23:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:30.341 23:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3332457 /var/tmp/spdk.sock 00:06:30.341 23:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3332457 ']' 00:06:30.341 23:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.341 23:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.341 23:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.341 23:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.341 23:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.341 [2024-11-09 23:39:56.389395] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:06:30.341 [2024-11-09 23:39:56.389528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332457 ] 00:06:30.341 [2024-11-09 23:39:56.523281] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.341 [2024-11-09 23:39:56.523351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.599 [2024-11-09 23:39:56.656829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.533 23:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:31.533 23:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:31.533 23:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3332599 00:06:31.533 23:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:31.533 23:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3332599 /var/tmp/spdk2.sock 00:06:31.534 23:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3332599 ']' 00:06:31.534 23:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.534 23:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:31.534 23:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.534 23:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:31.534 23:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.534 [2024-11-09 23:39:57.691692] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:06:31.534 [2024-11-09 23:39:57.691830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332599 ] 00:06:31.792 [2024-11-09 23:39:57.919389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.049 [2024-11-09 23:39:58.198655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.587 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.587 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:34.587 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3332599 00:06:34.587 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3332599 00:06:34.587 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.587 lslocks: write error 00:06:34.587 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3332457 00:06:34.587 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3332457 ']' 00:06:34.587 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3332457 00:06:34.587 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:34.587 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:34.587 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3332457 00:06:34.847 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:34.847 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:34.847 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3332457' 00:06:34.847 killing process with pid 3332457 00:06:34.847 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3332457 00:06:34.847 23:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3332457 00:06:40.119 23:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3332599 00:06:40.119 23:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3332599 ']' 00:06:40.119 23:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3332599 00:06:40.119 23:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:40.119 23:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:40.119 23:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3332599 00:06:40.119 23:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:40.119 23:40:05 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:40.119 23:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3332599' 00:06:40.119 killing process with pid 3332599 00:06:40.119 23:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3332599 00:06:40.119 23:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3332599 00:06:42.028 00:06:42.028 real 0m11.768s 00:06:42.028 user 0m12.146s 00:06:42.028 sys 0m1.459s 00:06:42.028 23:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:42.028 23:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.028 ************************************ 00:06:42.028 END TEST locking_app_on_unlocked_coremask 00:06:42.028 ************************************ 00:06:42.028 23:40:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:42.028 23:40:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:42.028 23:40:08 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:42.028 23:40:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.028 ************************************ 00:06:42.028 START TEST locking_app_on_locked_coremask 00:06:42.028 ************************************ 00:06:42.028 23:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:42.028 23:40:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3333833 00:06:42.028 23:40:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.028 23:40:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3333833 /var/tmp/spdk.sock 00:06:42.028 23:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3333833 ']' 00:06:42.028 23:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.028 23:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.028 23:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.028 23:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.028 23:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.028 [2024-11-09 23:40:08.208875] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:06:42.028 [2024-11-09 23:40:08.209029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333833 ] 00:06:42.287 [2024-11-09 23:40:08.344258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.288 [2024-11-09 23:40:08.476336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3333975 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3333975 /var/tmp/spdk2.sock 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3333975 /var/tmp/spdk2.sock 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3333975 /var/tmp/spdk2.sock 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3333975 ']' 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:43.668 23:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.668 [2024-11-09 23:40:09.548477] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:06:43.668 [2024-11-09 23:40:09.548672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333975 ] 00:06:43.668 [2024-11-09 23:40:09.766598] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3333833 has claimed it. 00:06:43.668 [2024-11-09 23:40:09.766706] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:44.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3333975) - No such process 00:06:44.235 ERROR: process (pid: 3333975) is no longer running 00:06:44.235 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:44.235 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:44.235 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:44.235 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.235 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.235 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.235 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3333833 00:06:44.235 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3333833 00:06:44.235 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.495 lslocks: write error 00:06:44.495 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3333833 00:06:44.495 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3333833 ']' 00:06:44.495 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3333833 00:06:44.495 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:44.495 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:44.495 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3333833 00:06:44.495 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:44.495 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:44.495 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3333833' 00:06:44.495 killing process with pid 3333833 00:06:44.495 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3333833 00:06:44.495 23:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3333833 00:06:47.035 00:06:47.035 real 0m4.829s 00:06:47.035 user 0m5.124s 00:06:47.035 sys 0m0.965s 00:06:47.035 23:40:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:06:47.035 23:40:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.035 ************************************ 00:06:47.035 END TEST locking_app_on_locked_coremask 00:06:47.035 ************************************ 00:06:47.035 23:40:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:47.035 23:40:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.035 23:40:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.035 23:40:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.035 ************************************ 00:06:47.035 START TEST locking_overlapped_coremask 00:06:47.036 ************************************ 00:06:47.036 23:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:47.036 23:40:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3334497 00:06:47.036 23:40:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:47.036 23:40:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3334497 /var/tmp/spdk.sock 00:06:47.036 23:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3334497 ']' 00:06:47.036 23:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.036 23:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:47.036 23:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.036 23:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:47.036 23:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.036 [2024-11-09 23:40:13.084681] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:06:47.036 [2024-11-09 23:40:13.084818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334497 ] 00:06:47.036 [2024-11-09 23:40:13.221479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.295 [2024-11-09 23:40:13.358854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.295 [2024-11-09 23:40:13.358920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.295 [2024-11-09 23:40:13.358926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3334668 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3334668 /var/tmp/spdk2.sock 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3334668 /var/tmp/spdk2.sock 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3334668 /var/tmp/spdk2.sock 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3334668 ']' 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:48.285 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.285 [2024-11-09 23:40:14.317053] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:06:48.285 [2024-11-09 23:40:14.317202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334668 ] 00:06:48.544 [2024-11-09 23:40:14.514602] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3334497 has claimed it. 00:06:48.544 [2024-11-09 23:40:14.514683] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3334668) - No such process 00:06:48.803 ERROR: process (pid: 3334668) is no longer running 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3334497 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3334497 ']' 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3334497 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:48.803 23:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3334497 00:06:49.061 23:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:49.061 23:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:49.062 23:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3334497' 00:06:49.062 killing process with pid 3334497 00:06:49.062 23:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3334497 00:06:49.062 23:40:15 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3334497 00:06:51.595 00:06:51.595 real 0m4.189s 00:06:51.595 user 0m11.417s 00:06:51.595 sys 0m0.764s 00:06:51.595 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.595 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.595 ************************************ 00:06:51.595 END TEST locking_overlapped_coremask 00:06:51.595 ************************************ 00:06:51.595 23:40:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:51.595 23:40:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:51.595 23:40:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.595 23:40:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.595 ************************************ 00:06:51.595 START TEST locking_overlapped_coremask_via_rpc 00:06:51.595 ************************************ 00:06:51.595 23:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:51.595 23:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3334978 00:06:51.595 23:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:51.595 23:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3334978 /var/tmp/spdk.sock 00:06:51.595 23:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3334978 ']' 00:06:51.595 23:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.595 23:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:51.595 23:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.595 23:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:51.595 23:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.595 [2024-11-09 23:40:17.330827] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:06:51.595 [2024-11-09 23:40:17.331002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334978 ] 00:06:51.595 [2024-11-09 23:40:17.466435] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.595 [2024-11-09 23:40:17.466512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.595 [2024-11-09 23:40:17.603598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.595 [2024-11-09 23:40:17.603656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.595 [2024-11-09 23:40:17.603663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.528 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.528 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:52.528 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3335118 00:06:52.528 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3335118 /var/tmp/spdk2.sock 00:06:52.528 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3335118 ']' 00:06:52.528 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.528 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.528 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:52.528 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.528 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.528 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.528 [2024-11-09 23:40:18.548961] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:06:52.528 [2024-11-09 23:40:18.549110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335118 ] 00:06:52.786 [2024-11-09 23:40:18.752779] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.786 [2024-11-09 23:40:18.752836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.044 [2024-11-09 23:40:19.009813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.044 [2024-11-09 23:40:19.013654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.044 [2024-11-09 23:40:19.013663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.574 [2024-11-09 23:40:21.273756] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3334978 has claimed it. 
00:06:55.574 request: 00:06:55.574 { 00:06:55.574 "method": "framework_enable_cpumask_locks", 00:06:55.574 "req_id": 1 00:06:55.574 } 00:06:55.574 Got JSON-RPC error response 00:06:55.574 response: 00:06:55.574 { 00:06:55.574 "code": -32603, 00:06:55.574 "message": "Failed to claim CPU core: 2" 00:06:55.574 } 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3334978 /var/tmp/spdk.sock 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3334978 ']' 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:55.574 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:55.575 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3335118 /var/tmp/spdk2.sock 00:06:55.575 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3335118 ']' 00:06:55.575 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.575 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:55.575 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:55.575 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:55.575 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.833 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:55.833 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:55.833 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:55.833 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.834 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.834 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.834 00:06:55.834 real 0m4.588s 00:06:55.834 user 0m1.599s 00:06:55.834 sys 0m0.236s 00:06:55.834 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.834 23:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.834 ************************************ 00:06:55.834 END TEST locking_overlapped_coremask_via_rpc 00:06:55.834 ************************************ 00:06:55.834 23:40:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:55.834 23:40:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3334978 ]] 00:06:55.834 23:40:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3334978 00:06:55.834 23:40:21 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3334978 ']' 00:06:55.834 23:40:21 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3334978 00:06:55.834 23:40:21 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:55.834 23:40:21 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:55.834 23:40:21 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3334978 00:06:55.834 23:40:21 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:55.834 23:40:21 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:55.834 23:40:21 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3334978' 00:06:55.834 killing process with pid 3334978 00:06:55.834 23:40:21 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3334978 00:06:55.834 23:40:21 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3334978 00:06:58.371 23:40:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3335118 ]] 00:06:58.371 23:40:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3335118 00:06:58.371 23:40:24 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3335118 ']' 00:06:58.371 23:40:24 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3335118 00:06:58.371 23:40:24 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:58.371 23:40:24 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:06:58.371 23:40:24 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3335118 00:06:58.371 23:40:24 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:58.371 23:40:24 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:58.371 23:40:24 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3335118' 00:06:58.371 killing process with pid 3335118 00:06:58.371 23:40:24 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3335118 00:06:58.371 23:40:24 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3335118 00:07:00.279 23:40:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.279 23:40:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:00.279 23:40:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3334978 ]] 00:07:00.279 23:40:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3334978 00:07:00.279 23:40:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3334978 ']' 00:07:00.279 23:40:26 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3334978 00:07:00.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3334978) - No such process 00:07:00.279 23:40:26 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3334978 is not found' 00:07:00.279 Process with pid 3334978 is not found 00:07:00.279 23:40:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3335118 ]] 00:07:00.279 23:40:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3335118 00:07:00.279 23:40:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3335118 ']' 00:07:00.279 23:40:26 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3335118 00:07:00.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3335118) - No such process 00:07:00.279 23:40:26 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3335118 is not found' 00:07:00.279 Process with pid 3335118 is not found 00:07:00.279 23:40:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.279 00:07:00.279 real 0m50.279s 00:07:00.279 user 1m25.799s 00:07:00.279 sys 0m7.562s 00:07:00.279 23:40:26 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.279 23:40:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.279 ************************************ 00:07:00.279 END TEST cpu_locks 00:07:00.279 ************************************ 00:07:00.279 00:07:00.279 real 1m20.355s 00:07:00.279 user 2m26.076s 00:07:00.279 sys 0m12.146s 00:07:00.279 23:40:26 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.279 23:40:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.279 ************************************ 00:07:00.279 END TEST event 00:07:00.279 ************************************ 00:07:00.279 23:40:26 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:00.279 23:40:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:00.279 23:40:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.279 23:40:26 -- common/autotest_common.sh@10 -- # set +x 00:07:00.279 ************************************ 00:07:00.279 START TEST thread 00:07:00.279 ************************************ 00:07:00.279 23:40:26 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:00.279 * Looking for test storage... 00:07:00.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:00.279 23:40:26 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:00.279 23:40:26 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:00.279 23:40:26 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:00.555 23:40:26 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:00.555 23:40:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.555 23:40:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.555 23:40:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.555 23:40:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.555 23:40:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.555 23:40:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.555 23:40:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.555 23:40:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.555 23:40:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.555 23:40:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.555 23:40:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.555 23:40:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:00.555 23:40:26 thread -- scripts/common.sh@345 -- # : 1 00:07:00.555 23:40:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.555 23:40:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.555 23:40:26 thread -- scripts/common.sh@365 -- # decimal 1 00:07:00.555 23:40:26 thread -- scripts/common.sh@353 -- # local d=1 00:07:00.555 23:40:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.555 23:40:26 thread -- scripts/common.sh@355 -- # echo 1 00:07:00.555 23:40:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.555 23:40:26 thread -- scripts/common.sh@366 -- # decimal 2 00:07:00.555 23:40:26 thread -- scripts/common.sh@353 -- # local d=2 00:07:00.555 23:40:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.555 23:40:26 thread -- scripts/common.sh@355 -- # echo 2 00:07:00.555 23:40:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.555 23:40:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.555 23:40:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.555 23:40:26 thread -- scripts/common.sh@368 -- # return 0 00:07:00.555 23:40:26 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.555 23:40:26 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:00.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.555 --rc genhtml_branch_coverage=1 00:07:00.555 --rc genhtml_function_coverage=1 00:07:00.555 --rc genhtml_legend=1 00:07:00.555 --rc geninfo_all_blocks=1 00:07:00.555 --rc geninfo_unexecuted_blocks=1 00:07:00.555 00:07:00.555 ' 00:07:00.555 23:40:26 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:00.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.555 --rc genhtml_branch_coverage=1 00:07:00.555 --rc genhtml_function_coverage=1 00:07:00.555 --rc genhtml_legend=1 00:07:00.555 --rc geninfo_all_blocks=1 00:07:00.555 --rc geninfo_unexecuted_blocks=1 00:07:00.555 
00:07:00.555 ' 00:07:00.555 23:40:26 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:00.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.555 --rc genhtml_branch_coverage=1 00:07:00.555 --rc genhtml_function_coverage=1 00:07:00.555 --rc genhtml_legend=1 00:07:00.555 --rc geninfo_all_blocks=1 00:07:00.555 --rc geninfo_unexecuted_blocks=1 00:07:00.555 00:07:00.555 ' 00:07:00.555 23:40:26 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:00.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.555 --rc genhtml_branch_coverage=1 00:07:00.555 --rc genhtml_function_coverage=1 00:07:00.555 --rc genhtml_legend=1 00:07:00.555 --rc geninfo_all_blocks=1 00:07:00.555 --rc geninfo_unexecuted_blocks=1 00:07:00.555 00:07:00.555 ' 00:07:00.555 23:40:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.555 23:40:26 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:00.555 23:40:26 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.555 23:40:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.555 ************************************ 00:07:00.555 START TEST thread_poller_perf 00:07:00.555 ************************************ 00:07:00.555 23:40:26 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.555 [2024-11-09 23:40:26.577164] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:07:00.555 [2024-11-09 23:40:26.577285] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336159 ] 00:07:00.555 [2024-11-09 23:40:26.723566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.814 [2024-11-09 23:40:26.861917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.814 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:02.196 [2024-11-09T22:40:28.397Z] ====================================== 00:07:02.196 [2024-11-09T22:40:28.397Z] busy:2719245517 (cyc) 00:07:02.196 [2024-11-09T22:40:28.397Z] total_run_count: 280000 00:07:02.196 [2024-11-09T22:40:28.397Z] tsc_hz: 2700000000 (cyc) 00:07:02.196 [2024-11-09T22:40:28.397Z] ====================================== 00:07:02.196 [2024-11-09T22:40:28.397Z] poller_cost: 9711 (cyc), 3596 (nsec) 00:07:02.196 00:07:02.196 real 0m1.580s 00:07:02.196 user 0m1.421s 00:07:02.196 sys 0m0.150s 00:07:02.196 23:40:28 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.196 23:40:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.196 ************************************ 00:07:02.196 END TEST thread_poller_perf 00:07:02.196 ************************************ 00:07:02.196 23:40:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:02.196 23:40:28 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:02.196 23:40:28 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.196 23:40:28 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.197 ************************************ 00:07:02.197 START TEST thread_poller_perf 00:07:02.197 ************************************ 00:07:02.197 23:40:28 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:02.197 [2024-11-09 23:40:28.211623] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:07:02.197 [2024-11-09 23:40:28.211767] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336438 ] 00:07:02.197 [2024-11-09 23:40:28.371289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.457 [2024-11-09 23:40:28.509828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.457 Running 1000 pollers for 1 seconds with 0 microseconds period. 
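For reference, the poller_cost figures in these reports follow from the other counters: cost in cycles is busy cycles divided by total_run_count, and the nanosecond value divides that by the TSC rate in cycles per nanosecond. A quick recheck of the 1-microsecond-period run just printed, using integer division to match the truncated output (a sketch, not part of the test itself):

  busy=2719245517; runs=280000; tsc_hz=2700000000
  cyc=$(( busy / runs ))                    # 9711 cycles per poller invocation
  nsec=$(( cyc * 1000000000 / tsc_hz ))     # 3596 ns at 2.7 GHz
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"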
00:07:03.835 [2024-11-09T22:40:30.036Z] ====================================== 00:07:03.835 [2024-11-09T22:40:30.037Z] busy:2705183817 (cyc) 00:07:03.836 [2024-11-09T22:40:30.037Z] total_run_count: 3625000 00:07:03.836 [2024-11-09T22:40:30.037Z] tsc_hz: 2700000000 (cyc) 00:07:03.836 [2024-11-09T22:40:30.037Z] ====================================== 00:07:03.836 [2024-11-09T22:40:30.037Z] poller_cost: 746 (cyc), 276 (nsec) 00:07:03.836 00:07:03.836 real 0m1.597s 00:07:03.836 user 0m1.425s 00:07:03.836 sys 0m0.162s 00:07:03.836 23:40:29 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.836 23:40:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.836 ************************************ 00:07:03.836 END TEST thread_poller_perf 00:07:03.836 ************************************ 00:07:03.836 23:40:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:03.836 00:07:03.836 real 0m3.415s 00:07:03.836 user 0m2.982s 00:07:03.836 sys 0m0.428s 00:07:03.836 23:40:29 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.836 23:40:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.836 ************************************ 00:07:03.836 END TEST thread 00:07:03.836 ************************************ 00:07:03.836 23:40:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:03.836 23:40:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:03.836 23:40:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.836 23:40:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.836 23:40:29 -- common/autotest_common.sh@10 -- # set +x 00:07:03.836 ************************************ 00:07:03.836 START TEST app_cmdline 00:07:03.836 ************************************ 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:03.836 * Looking for test storage... 
00:07:03.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.836 23:40:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:03.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.836 --rc genhtml_branch_coverage=1 00:07:03.836 --rc genhtml_function_coverage=1 00:07:03.836 --rc genhtml_legend=1 00:07:03.836 --rc geninfo_all_blocks=1 00:07:03.836 --rc geninfo_unexecuted_blocks=1 00:07:03.836 00:07:03.836 ' 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:03.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.836 --rc genhtml_branch_coverage=1 00:07:03.836 --rc genhtml_function_coverage=1 00:07:03.836 --rc genhtml_legend=1 00:07:03.836 --rc geninfo_all_blocks=1 00:07:03.836 --rc geninfo_unexecuted_blocks=1 
00:07:03.836 00:07:03.836 ' 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:03.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.836 --rc genhtml_branch_coverage=1 00:07:03.836 --rc genhtml_function_coverage=1 00:07:03.836 --rc genhtml_legend=1 00:07:03.836 --rc geninfo_all_blocks=1 00:07:03.836 --rc geninfo_unexecuted_blocks=1 00:07:03.836 00:07:03.836 ' 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:03.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.836 --rc genhtml_branch_coverage=1 00:07:03.836 --rc genhtml_function_coverage=1 00:07:03.836 --rc genhtml_legend=1 00:07:03.836 --rc geninfo_all_blocks=1 00:07:03.836 --rc geninfo_unexecuted_blocks=1 00:07:03.836 00:07:03.836 ' 00:07:03.836 23:40:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:03.836 23:40:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3336691 00:07:03.836 23:40:29 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:03.836 23:40:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3336691 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3336691 ']' 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.836 23:40:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.095 [2024-11-09 23:40:30.091066] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:07:04.095 [2024-11-09 23:40:30.091212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336691 ] 00:07:04.095 [2024-11-09 23:40:30.237613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.354 [2024-11-09 23:40:30.372506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.289 23:40:31 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:05.289 23:40:31 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:05.289 23:40:31 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:05.547 { 00:07:05.547 "version": "SPDK v25.01-pre git sha1 06bc8ce53", 00:07:05.547 "fields": { 00:07:05.547 "major": 25, 00:07:05.547 "minor": 1, 00:07:05.547 "patch": 0, 00:07:05.547 "suffix": "-pre", 00:07:05.547 "commit": "06bc8ce53" 00:07:05.547 } 00:07:05.547 } 00:07:05.547 23:40:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:05.547 23:40:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:05.547 23:40:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:05.547 23:40:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:05.547 23:40:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:05.547 23:40:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.547 23:40:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.547 23:40:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:05.547 23:40:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:05.547 23:40:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:05.547 23:40:31 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.807 request: 00:07:05.807 { 00:07:05.807 "method": "env_dpdk_get_mem_stats", 00:07:05.807 "req_id": 1 00:07:05.807 } 00:07:05.807 Got JSON-RPC error response 00:07:05.807 response: 00:07:05.807 { 00:07:05.807 "code": -32601, 00:07:05.807 "message": "Method not found" 00:07:05.807 } 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.067 23:40:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3336691 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3336691 ']' 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3336691 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3336691 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3336691' 00:07:06.067 killing process with pid 3336691 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@971 -- # kill 3336691 00:07:06.067 23:40:32 app_cmdline -- common/autotest_common.sh@976 -- # wait 3336691 00:07:08.605 00:07:08.605 real 0m4.624s 00:07:08.605 user 0m5.172s 00:07:08.605 sys 0m0.717s 00:07:08.605 23:40:34 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.605 23:40:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.605 ************************************ 00:07:08.605 END TEST app_cmdline 00:07:08.605 ************************************ 00:07:08.605 23:40:34 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:08.605 23:40:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:08.605 23:40:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.605 23:40:34 -- common/autotest_common.sh@10 -- # set +x 00:07:08.605 ************************************ 00:07:08.605 START TEST version 00:07:08.605 ************************************ 00:07:08.605 23:40:34 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:08.605 * Looking for test storage... 
00:07:08.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:08.605 23:40:34 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:08.605 23:40:34 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:08.605 23:40:34 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:08.605 23:40:34 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:08.605 23:40:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.605 23:40:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.605 23:40:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.605 23:40:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.605 23:40:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.605 23:40:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.605 23:40:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.605 23:40:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.605 23:40:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.605 23:40:34 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.605 23:40:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.605 23:40:34 version -- scripts/common.sh@344 -- # case "$op" in 00:07:08.605 23:40:34 version -- scripts/common.sh@345 -- # : 1 00:07:08.605 23:40:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.605 23:40:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.605 23:40:34 version -- scripts/common.sh@365 -- # decimal 1 00:07:08.605 23:40:34 version -- scripts/common.sh@353 -- # local d=1 00:07:08.605 23:40:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.605 23:40:34 version -- scripts/common.sh@355 -- # echo 1 00:07:08.605 23:40:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.605 23:40:34 version -- scripts/common.sh@366 -- # decimal 2 00:07:08.605 23:40:34 version -- scripts/common.sh@353 -- # local d=2 00:07:08.605 23:40:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.605 23:40:34 version -- scripts/common.sh@355 -- # echo 2 00:07:08.605 23:40:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.605 23:40:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.605 23:40:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.605 23:40:34 version -- scripts/common.sh@368 -- # return 0 00:07:08.605 23:40:34 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.605 23:40:34 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:08.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.605 --rc genhtml_branch_coverage=1 00:07:08.605 --rc genhtml_function_coverage=1 00:07:08.605 --rc genhtml_legend=1 00:07:08.605 --rc geninfo_all_blocks=1 00:07:08.605 --rc geninfo_unexecuted_blocks=1 00:07:08.605 00:07:08.605 ' 00:07:08.605 23:40:34 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:08.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.605 --rc genhtml_branch_coverage=1 00:07:08.605 --rc genhtml_function_coverage=1 00:07:08.605 --rc genhtml_legend=1 00:07:08.605 --rc geninfo_all_blocks=1 00:07:08.605 --rc geninfo_unexecuted_blocks=1 00:07:08.605 00:07:08.605 ' 00:07:08.605 23:40:34 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:08.605 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.605 --rc genhtml_branch_coverage=1 00:07:08.605 --rc genhtml_function_coverage=1 00:07:08.605 --rc genhtml_legend=1 00:07:08.605 --rc geninfo_all_blocks=1 00:07:08.605 --rc geninfo_unexecuted_blocks=1 00:07:08.605 00:07:08.605 ' 00:07:08.605 23:40:34 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:08.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.605 --rc genhtml_branch_coverage=1 00:07:08.605 --rc genhtml_function_coverage=1 00:07:08.605 --rc genhtml_legend=1 00:07:08.605 --rc geninfo_all_blocks=1 00:07:08.605 --rc geninfo_unexecuted_blocks=1 00:07:08.605 00:07:08.605 ' 00:07:08.605 23:40:34 version -- app/version.sh@17 -- # get_header_version major 00:07:08.605 23:40:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.605 23:40:34 version -- app/version.sh@14 -- # cut -f2 00:07:08.605 23:40:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.605 23:40:34 version -- app/version.sh@17 -- # major=25 00:07:08.605 23:40:34 version -- app/version.sh@18 -- # get_header_version minor 00:07:08.605 23:40:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.605 23:40:34 version -- app/version.sh@14 -- # cut -f2 00:07:08.605 23:40:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.605 23:40:34 version -- app/version.sh@18 -- # minor=1 00:07:08.605 23:40:34 version -- app/version.sh@19 -- # get_header_version patch 00:07:08.605 23:40:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.605 23:40:34 version -- app/version.sh@14 -- # cut -f2 00:07:08.605 23:40:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.605 23:40:34 version -- app/version.sh@19 -- # patch=0 00:07:08.605 23:40:34 version -- app/version.sh@20 -- # get_header_version suffix 00:07:08.605 23:40:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.605 23:40:34 version -- app/version.sh@14 -- # cut -f2 00:07:08.605 23:40:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.606 23:40:34 version -- app/version.sh@20 -- # suffix=-pre 00:07:08.606 23:40:34 version -- app/version.sh@22 -- # version=25.1 00:07:08.606 23:40:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:08.606 23:40:34 version -- app/version.sh@28 -- # version=25.1rc0 00:07:08.606 23:40:34 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:08.606 23:40:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:08.606 23:40:34 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:08.606 23:40:34 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:08.606 00:07:08.606 real 0m0.193s 00:07:08.606 user 0m0.126s 00:07:08.606 sys 0m0.093s 00:07:08.606 23:40:34 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.606 
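The version test above rebuilds the release string straight from the C header and cross-checks it against the installed Python package. A stand-alone sketch of the same extraction, assuming the repository layout used here (include/spdk/version.h with tab-separated #define SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX lines, mirroring the grep/cut/tr pipeline in the trace); the '-pre'-to-'rc0' mapping at the end is an assumption that matches the 25.1rc0 result above:

  hdr=include/spdk/version.h
  get() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
  major=$(get MAJOR); minor=$(get MINOR); patch=$(get PATCH); suffix=$(get SUFFIX)
  ver="${major}.${minor}"
  (( patch != 0 )) && ver="${ver}.${patch}"
  [[ $suffix == -pre ]] && ver="${ver}rc0"
  echo "$ver"    # 25.1rc0 for this tree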
23:40:34 version -- common/autotest_common.sh@10 -- # set +x 00:07:08.606 ************************************ 00:07:08.606 END TEST version 00:07:08.606 ************************************ 00:07:08.606 23:40:34 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:08.606 23:40:34 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:08.606 23:40:34 -- spdk/autotest.sh@194 -- # uname -s 00:07:08.606 23:40:34 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:08.606 23:40:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:08.606 23:40:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:08.606 23:40:34 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:08.606 23:40:34 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:08.606 23:40:34 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:08.606 23:40:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:08.606 23:40:34 -- common/autotest_common.sh@10 -- # set +x 00:07:08.606 23:40:34 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:08.606 23:40:34 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:08.606 23:40:34 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:08.606 23:40:34 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:08.606 23:40:34 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:08.606 23:40:34 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:08.606 23:40:34 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.606 23:40:34 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:08.606 23:40:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.606 23:40:34 -- common/autotest_common.sh@10 -- # set +x 00:07:08.606 ************************************ 00:07:08.606 START TEST nvmf_tcp 00:07:08.606 ************************************ 00:07:08.606 23:40:34 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.865 * Looking for test storage... 
00:07:08.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:08.865 23:40:34 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:08.865 23:40:34 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:08.865 23:40:34 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:08.865 23:40:34 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.865 23:40:34 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:08.865 23:40:34 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.865 23:40:34 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:08.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.865 --rc genhtml_branch_coverage=1 00:07:08.865 --rc genhtml_function_coverage=1 00:07:08.865 --rc genhtml_legend=1 00:07:08.865 --rc geninfo_all_blocks=1 00:07:08.865 --rc geninfo_unexecuted_blocks=1 00:07:08.865 00:07:08.865 ' 00:07:08.865 23:40:34 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:08.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.865 --rc genhtml_branch_coverage=1 00:07:08.865 --rc genhtml_function_coverage=1 00:07:08.865 --rc genhtml_legend=1 00:07:08.865 --rc geninfo_all_blocks=1 00:07:08.865 --rc geninfo_unexecuted_blocks=1 00:07:08.865 00:07:08.865 ' 00:07:08.865 23:40:34 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:07:08.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.865 --rc genhtml_branch_coverage=1 00:07:08.865 --rc genhtml_function_coverage=1 00:07:08.865 --rc genhtml_legend=1 00:07:08.865 --rc geninfo_all_blocks=1 00:07:08.865 --rc geninfo_unexecuted_blocks=1 00:07:08.865 00:07:08.865 ' 00:07:08.865 23:40:34 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:08.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.865 --rc genhtml_branch_coverage=1 00:07:08.865 --rc genhtml_function_coverage=1 00:07:08.865 --rc genhtml_legend=1 00:07:08.865 --rc geninfo_all_blocks=1 00:07:08.865 --rc geninfo_unexecuted_blocks=1 00:07:08.865 00:07:08.865 ' 00:07:08.865 23:40:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:08.865 23:40:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:08.865 23:40:34 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:08.865 23:40:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:08.865 23:40:34 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.865 23:40:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.865 ************************************ 00:07:08.865 START TEST nvmf_target_core 00:07:08.865 ************************************ 00:07:08.865 23:40:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:08.865 * Looking for test storage... 00:07:08.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:08.865 23:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:08.865 23:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:07:08.865 23:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:09.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.125 --rc genhtml_branch_coverage=1 00:07:09.125 --rc genhtml_function_coverage=1 00:07:09.125 --rc genhtml_legend=1 00:07:09.125 --rc geninfo_all_blocks=1 00:07:09.125 --rc geninfo_unexecuted_blocks=1 00:07:09.125 00:07:09.125 ' 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:09.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.125 --rc genhtml_branch_coverage=1 00:07:09.125 --rc genhtml_function_coverage=1 00:07:09.125 --rc genhtml_legend=1 00:07:09.125 --rc geninfo_all_blocks=1 00:07:09.125 --rc geninfo_unexecuted_blocks=1 00:07:09.125 00:07:09.125 ' 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:09.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.125 --rc genhtml_branch_coverage=1 00:07:09.125 --rc genhtml_function_coverage=1 00:07:09.125 --rc genhtml_legend=1 00:07:09.125 --rc geninfo_all_blocks=1 00:07:09.125 --rc geninfo_unexecuted_blocks=1 00:07:09.125 00:07:09.125 ' 00:07:09.125 23:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:09.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.125 --rc genhtml_branch_coverage=1 00:07:09.125 --rc genhtml_function_coverage=1 00:07:09.126 --rc genhtml_legend=1 00:07:09.126 --rc geninfo_all_blocks=1 00:07:09.126 --rc geninfo_unexecuted_blocks=1 00:07:09.126 00:07:09.126 ' 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:09.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:09.126 
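The "[: : integer expression expected" message in the nvmf/common.sh trace above is not a failure: line 33 compares a variable that is empty in this environment with the numeric -eq operator, test(1) complains and returns non-zero, and the script simply falls through to the other branch. A minimal reproduction with an illustrative variable name (not the one common.sh actually uses), plus a quieter defensive form:

  flag=""
  [ "$flag" -eq 1 ] && echo enabled         # prints: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] && echo enabled    # defaulting to 0 keeps the numeric test quiet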
************************************ 00:07:09.126 START TEST nvmf_abort 00:07:09.126 ************************************ 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:09.126 * Looking for test storage... 00:07:09.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.126 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:09.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.127 --rc genhtml_branch_coverage=1 00:07:09.127 --rc genhtml_function_coverage=1 00:07:09.127 --rc genhtml_legend=1 00:07:09.127 --rc geninfo_all_blocks=1 00:07:09.127 --rc geninfo_unexecuted_blocks=1 00:07:09.127 00:07:09.127 ' 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:09.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.127 --rc genhtml_branch_coverage=1 00:07:09.127 --rc genhtml_function_coverage=1 00:07:09.127 --rc genhtml_legend=1 00:07:09.127 --rc geninfo_all_blocks=1 00:07:09.127 --rc geninfo_unexecuted_blocks=1 00:07:09.127 00:07:09.127 ' 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:09.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.127 --rc genhtml_branch_coverage=1 00:07:09.127 --rc genhtml_function_coverage=1 00:07:09.127 --rc genhtml_legend=1 00:07:09.127 --rc geninfo_all_blocks=1 00:07:09.127 --rc geninfo_unexecuted_blocks=1 00:07:09.127 00:07:09.127 ' 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:09.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.127 --rc genhtml_branch_coverage=1 00:07:09.127 --rc genhtml_function_coverage=1 00:07:09.127 --rc genhtml_legend=1 00:07:09.127 --rc geninfo_all_blocks=1 00:07:09.127 --rc geninfo_unexecuted_blocks=1 00:07:09.127 00:07:09.127 ' 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:09.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
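The /etc/opt/spdk-pkgdep/paths/export.sh fragments traced above are sourced again by every test script, and each pass unconditionally prepends the Go, golangci-lint and protoc directories, which is why the PATH echoed in this log keeps growing. A minimal sketch of an idempotent prepend is below; the prepend_path helper is hypothetical and is not part of export.sh.

    # hypothetical guard; the real export.sh simply prepends on every source
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/go/1.21.1/bin
    export PATH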
00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:09.127 23:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.669 23:40:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:11.669 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:11.670 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:11.670 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:11.670 23:40:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:11.670 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:11.670 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:11.670 23:40:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:11.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:07:11.670 00:07:11.670 --- 10.0.0.2 ping statistics --- 00:07:11.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.670 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:11.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:11.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:07:11.670 00:07:11.670 --- 10.0.0.1 ping statistics --- 00:07:11.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.670 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3339127 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3339127 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3339127 ']' 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:11.670 23:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.671 [2024-11-09 23:40:37.534161] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
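nvmf_tcp_init above splits the two detected E810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables ACCEPT rule opens TCP port 4420, and connectivity is verified with one ping in each direction. A condensed, hand-runnable equivalent is sketched below, assuming the same interface names; the address flushes and error handling from common.sh are omitted.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator address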
00:07:11.671 [2024-11-09 23:40:37.534294] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.671 [2024-11-09 23:40:37.689778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.671 [2024-11-09 23:40:37.835013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.671 [2024-11-09 23:40:37.835090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.671 [2024-11-09 23:40:37.835117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.671 [2024-11-09 23:40:37.835141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.671 [2024-11-09 23:40:37.835161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:11.671 [2024-11-09 23:40:37.837833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.671 [2024-11-09 23:40:37.837899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.671 [2024-11-09 23:40:37.837904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.608 [2024-11-09 23:40:38.568312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.608 Malloc0 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.608 Delay0 
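The rpc_cmd calls traced above create the TCP transport and stack a Delay0 bdev on top of a 64 MB Malloc0 bdev; the large 1000000 latency arguments keep I/O outstanding long enough for the abort workload to have commands to cancel. The subsystem, namespace and listener are added in the lines that follow. Roughly equivalent direct scripts/rpc.py invocations are sketched below, on the assumption that rpc_cmd is ultimately driving the same RPCs against the default /var/tmp/spdk.sock socket.

    # approximate rpc.py equivalents of the rpc_cmd calls in this trace
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000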
00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.608 [2024-11-09 23:40:38.692597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.608 23:40:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:12.866 [2024-11-09 23:40:38.911724] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:15.399 Initializing NVMe Controllers 00:07:15.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:15.399 controller IO queue size 128 less than required 00:07:15.399 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:15.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:15.399 Initialization complete. Launching workers. 
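The initiator-side workload just launched is the abort example from the SPDK build tree, pointed at the listener created above; its per-namespace and per-controller counters are reported immediately below. As I read the flags, -c 0x1 pins it to one core, -t 1 runs for about a second, -q 128 sets the queue depth, and -l warning limits log output, but treat those readings as an assumption rather than documentation; the command itself is copied from the trace.

    # the initiator-side command the harness runs against the target at 10.0.0.2:4420
    ./build/examples/abort -c 0x1 -t 1 -q 128 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'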
00:07:15.399 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 22641 00:07:15.399 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 22698, failed to submit 66 00:07:15.399 success 22641, unsuccessful 57, failed 0 00:07:15.399 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:15.399 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.399 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.399 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.399 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:15.400 rmmod nvme_tcp 00:07:15.400 rmmod nvme_fabrics 00:07:15.400 rmmod nvme_keyring 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3339127 ']' 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3339127 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3339127 ']' 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3339127 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3339127 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3339127' 00:07:15.400 killing process with pid 3339127 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3339127 00:07:15.400 23:40:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3339127 00:07:16.335 23:40:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:16.335 23:40:42 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:16.335 23:40:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:16.335 23:40:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:16.335 23:40:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:16.335 23:40:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:16.335 23:40:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:16.335 23:40:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.335 23:40:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:16.335 23:40:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.335 23:40:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.335 23:40:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:18.872 00:07:18.872 real 0m9.333s 00:07:18.872 user 0m15.954s 00:07:18.872 sys 0m2.725s 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.872 ************************************ 00:07:18.872 END TEST nvmf_abort 00:07:18.872 ************************************ 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.872 ************************************ 00:07:18.872 START TEST nvmf_ns_hotplug_stress 00:07:18.872 ************************************ 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:18.872 * Looking for test storage... 
00:07:18.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.872 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:18.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.873 --rc genhtml_branch_coverage=1 00:07:18.873 --rc genhtml_function_coverage=1 00:07:18.873 --rc genhtml_legend=1 00:07:18.873 --rc geninfo_all_blocks=1 00:07:18.873 --rc geninfo_unexecuted_blocks=1 00:07:18.873 00:07:18.873 ' 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:18.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.873 --rc genhtml_branch_coverage=1 00:07:18.873 --rc genhtml_function_coverage=1 00:07:18.873 --rc genhtml_legend=1 00:07:18.873 --rc geninfo_all_blocks=1 00:07:18.873 --rc geninfo_unexecuted_blocks=1 00:07:18.873 00:07:18.873 ' 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:18.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.873 --rc genhtml_branch_coverage=1 00:07:18.873 --rc genhtml_function_coverage=1 00:07:18.873 --rc genhtml_legend=1 00:07:18.873 --rc geninfo_all_blocks=1 00:07:18.873 --rc geninfo_unexecuted_blocks=1 00:07:18.873 00:07:18.873 ' 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:18.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.873 --rc genhtml_branch_coverage=1 00:07:18.873 --rc genhtml_function_coverage=1 00:07:18.873 --rc genhtml_legend=1 00:07:18.873 --rc geninfo_all_blocks=1 00:07:18.873 --rc geninfo_unexecuted_blocks=1 00:07:18.873 00:07:18.873 ' 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.873 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.874 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.874 23:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:20.777 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.777 
23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:20.777 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:20.777 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:20.777 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:20.777 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:20.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:20.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:07:20.778 00:07:20.778 --- 10.0.0.2 ping statistics --- 00:07:20.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.778 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:07:20.778 00:07:20.778 --- 10.0.0.1 ping statistics --- 00:07:20.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.778 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3341630 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3341630 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
3341630 ']' 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:20.778 23:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:21.036 [2024-11-09 23:40:46.999515] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:07:21.036 [2024-11-09 23:40:46.999693] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.036 [2024-11-09 23:40:47.148742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.294 [2024-11-09 23:40:47.287190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.294 [2024-11-09 23:40:47.287264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.294 [2024-11-09 23:40:47.287296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.294 [2024-11-09 23:40:47.287325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.294 [2024-11-09 23:40:47.287345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
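At this point nvmftestinit has already built the back-to-back topology the rest of the test relies on: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24 (the target side, where nvmf_tgt listens), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (the initiator side), TCP port 4420 is opened with iptables, and both directions are verified with ping. Condensed from the commands traced above into a plain recap (run as root; the cvl_* names are specific to this host's ice driver):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root namespace -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator address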
00:07:21.294 [2024-11-09 23:40:47.290077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.294 [2024-11-09 23:40:47.290170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.294 [2024-11-09 23:40:47.290190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.859 23:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:21.859 23:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:07:21.859 23:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.859 23:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:21.859 23:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:21.859 23:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.859 23:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:21.859 23:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:22.118 [2024-11-09 23:40:48.307078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.375 23:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:22.633 23:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.891 [2024-11-09 23:40:48.841050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.891 23:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:23.148 23:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:23.406 Malloc0 00:07:23.406 23:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:23.667 Delay0 00:07:23.667 23:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.930 23:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:24.188 NULL1 00:07:24.188 23:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:24.471 23:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3342187 00:07:24.471 23:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:24.471 23:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:24.472 23:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.874 Read completed with error (sct=0, sc=11) 00:07:25.874 23:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.132 23:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:26.132 23:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:26.391 true 00:07:26.391 23:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:26.391 23:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.958 23:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.216 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.216 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.216 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.216 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.216 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.488 [2024-11-09 23:40:53.471276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.488 [2024-11-09 23:40:53.471410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.488 [2024-11-09 23:40:53.471499] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:27.488 [2024-11-09 23:40:53.471576 - 23:40:53.481032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (duplicate error lines omitted)
00:07:27.489 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:27.489 [2024-11-09 23:40:53.481711 - 23:40:53.499130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (duplicate error lines omitted)
00:07:27.491 23:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:07:27.491 [2024-11-09 23:40:53.499217 - 23:40:53.499491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (duplicate error lines omitted)
00:07:27.491 23:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:07:27.491 [2024-11-09 23:40:53.499595 - 23:40:53.499677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (duplicate error lines omitted)
00:07:27.491 [2024-11-09 23:40:53.499765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.499932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.500020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.500100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.500182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.500263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.500346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.500426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.500516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.500612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.500703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.501350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.501433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.501531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.501629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.501712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.501796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.501895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.501976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.502058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.502157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.502243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.502328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.502410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.502507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.502612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.502695] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.491 [2024-11-09 23:40:53.502786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.502873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.502969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.503049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.503127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.503210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.503287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.503369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.503453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.503536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.503637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.503749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.503847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.503949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.504036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.504113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.504192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.504268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.504375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.504478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.504564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.504668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.504751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.504831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.504925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 
[2024-11-09 23:40:53.505035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.505134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.505217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.505298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.505376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.505458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.505541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.505631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.505708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.505801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.505881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.505972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.506064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.506151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.506233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.506315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.506396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.506481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.506558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.506644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.506730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.506815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.506903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.507905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.507995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.508079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.508161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.508243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.508338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.508417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.508503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.508593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.508682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.508789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.508873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.508958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.509043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.509141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.509225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.509317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.509400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.509482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.509563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.509656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.509745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.509832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.509914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.509995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.510078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.510165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.510244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.510323] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.510402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.510481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.510564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.510660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.510738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.510830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.510931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.511013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.511095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.511176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.511257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.511335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.511428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.511538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.511629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.511712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.492 [2024-11-09 23:40:53.511791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.511872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.511983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.512068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.512157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.512240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.512319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.512398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.512486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 
[2024-11-09 23:40:53.512583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.512685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.512764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.512849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.512932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.513016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.513106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.513188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.513271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.513555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.513653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.513729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.513827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.513913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.513996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.514093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.514174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.514248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.514356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.514455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.514538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.514627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.514709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.514801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.514880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.514975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.515076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.515161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.515241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.515319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.515397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.515477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.515559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.515662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.515757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.515843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.515938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.516032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.516112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.516188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.516266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.516342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.516422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.516499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.516607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.516689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.516770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.516851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.516950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.517031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.517126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.517214] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.517301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.517379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.517473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.517551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.517659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.517744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.517833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.517929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.518009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.518085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.518166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.518250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.518333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.518418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.518499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.518579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.518699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.518783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.518884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.518963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.519064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.520130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.520231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.520314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.520394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 
[2024-11-09 23:40:53.520477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.493 [2024-11-09 23:40:53.520555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.520662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.520745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.520825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.520914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.520994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.521090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.521171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.521249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.521332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.521419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.521498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.521604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.521687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.521773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.521858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.521955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.522035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.522114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.522196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.522273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.522357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.522433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.522508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.522616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.522699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.522793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.522895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.522980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.523077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.523160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.523286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.523368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.523467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.523561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.523648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.523746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.523828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.523920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.524017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.524095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.524173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.524247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.524326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.524404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.524482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.524559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.524667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.524754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.524834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.524932] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.525012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.525097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.525181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.525261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.525340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.525420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.525497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.525612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.525845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.525930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.526009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.526097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.526208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.526304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.526400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.526488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.526573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.526665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.526746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.526826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.526923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.526994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.527072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.527144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.527750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 
[2024-11-09 23:40:53.527835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.527936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.528020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.528097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.528176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.528254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.528332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.528415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.528492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.528597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.528680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.528825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.528926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.529008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.529102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.529182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.529294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.529383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.529462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.529541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.529642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.529721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.529820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.494 [2024-11-09 23:40:53.529902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.529984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.530078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.530159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.530241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.530319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.530401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.530477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.530561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.530677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.530756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.530836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.530929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.531011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.531095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.531174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.531250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.531328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.531407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.531486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.531596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.531688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.531770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.531854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.531956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532276] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.532999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.533075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.533154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.533396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.533475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.533556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.533652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.533736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.533815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.533922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.534035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.534118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.534199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.534292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.534369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.534446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 [2024-11-09 23:40:53.534522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495 
[2024-11-09 23:40:53.534643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.495
[the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd error repeats continuously between 23:40:53.534643 and 23:40:53.584107 (elapsed 00:07:27.495 through 00:07:27.500); the individual repetitions are omitted here]
[2024-11-09 23:40:53.574779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.499 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:27.499
[2024-11-09 23:40:53.584107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.500
[2024-11-09 23:40:53.584210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.500 [2024-11-09 23:40:53.584313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.500 [2024-11-09 23:40:53.584393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.500 [2024-11-09 23:40:53.584472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.500 [2024-11-09 23:40:53.584567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.500 [2024-11-09 23:40:53.584670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.500 [2024-11-09 23:40:53.584752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.500 [2024-11-09 23:40:53.584833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.500 [2024-11-09 23:40:53.584930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.500 [2024-11-09 23:40:53.585034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.500 [2024-11-09 23:40:53.585121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.500 [2024-11-09 23:40:53.585202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.585285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.585387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.585473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.585550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.585655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.585737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.585823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.585929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.586008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.586085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.586161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.586239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.586323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.586398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.586476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.586556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.586669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.586756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.586839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.586932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.587013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.587091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.587166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.587239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.587345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.587422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.587506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.587621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.587705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.587815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.587898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.587997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.588093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.588193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.588275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.588376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.588456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.588534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.588668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.588769] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.588853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.588933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.589027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.589105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.589183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.589257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.589337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.589418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.589497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.589618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.589699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.589931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.590012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.590101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.590188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.590268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.590348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.590446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.590530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.590637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.590720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.590801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.590887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.590998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.591086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 
[2024-11-09 23:40:53.591177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.591255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.591351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.591429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.591509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.591612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.591696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.591775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.591851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.591957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.592057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.592151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.592232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.592314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.592399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.592495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.593277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.593376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.593458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.593557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.593645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.593728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.593808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.593901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.593978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.594062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.594158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.594236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.594317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.594402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.594505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.501 [2024-11-09 23:40:53.594583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.594699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.594779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.594864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.594959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.595037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.595115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.595201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.595278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.595353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.595430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.595509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.595615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.595698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.595779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.595862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.595963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.596045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.596122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.596203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.596287] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.596362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.596464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.596545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.596639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.596724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.596802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.596888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.596987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.597063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.597153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.597234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.597329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.597434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.597519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.597605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.597679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.597771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.597869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.597948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.598041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.598124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.598203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.598279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.598353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.598432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 
[2024-11-09 23:40:53.598517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.598626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.598711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.598944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.599028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.599108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.599192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.599275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.599357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.599452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.599530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.599636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.599721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.599802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.599883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.599994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.600077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.600161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.600242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.600339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.600424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.600510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.600596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.600693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.600776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.600856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.600950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.601060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.601140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.601236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.601317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.601398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.601480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.601572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.601689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.601810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.602435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.602522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.602615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.602702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.602787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.602867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.602960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.603038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.502 [2024-11-09 23:40:53.603118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.603196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.603296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.603383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.603463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.603541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.603661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.603747] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.603829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.603915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.604010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.604091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.604172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.604252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.604329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.604406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.604490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.604582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.604671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.604753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.604836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.604923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.605003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.605084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.605167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.605249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.605330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.605408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.605484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.605579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.605675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.605763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.605862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 
[2024-11-09 23:40:53.605942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.606050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.606131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.606212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.606314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.606397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.606494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.606573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.606684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.606764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.606858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.606953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.607036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.607133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.607211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.607286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.607367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.607445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.607522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.607629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.607717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.607798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.607881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.608140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.608222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.608303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.608401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.608479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.608558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.608665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.608747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.608835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.608916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.609009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.609088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.609165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.609266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.609347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.609425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.609505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.609627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.609713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.609792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.609876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.609967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.610043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.610152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.610236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.610328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.610414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.610492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.610571] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.610660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.611666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.611748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.611828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.611908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.611990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.612086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.612164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.612262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.612343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.612433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.612511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.612619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.612717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.503 [2024-11-09 23:40:53.612797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.612882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.612979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.613058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.613134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.613212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.613292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.613372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.613457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.613537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.613649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 
[2024-11-09 23:40:53.613735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.613816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.613910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.613988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.614072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.614154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.614231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.614308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.614385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.614480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.614554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.614674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.614760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.614839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.614918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.615012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.615122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.615205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.615298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.615391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.615486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.615573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.615682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.615761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.615843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.615960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.616044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.616143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.616227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.616320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.616401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.616481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.616560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.616663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.616746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.616833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.616936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.617015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.617096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.617178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.617423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.617507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.617602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.617687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.617768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.617849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.617949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.618034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.618113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.618194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.618274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 [2024-11-09 23:40:53.618351] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.504 
[... the nvmf_bdev_ctrlr_read_cmd error above repeated verbatim several hundred times between [2024-11-09 23:40:53.618453] and [2024-11-09 23:40:53.669828]; individual entries elided ...] 00:07:27.510 
[2024-11-09 23:40:53.669929] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.670957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.671037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.671113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.671193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.671277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.671355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.671434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.671505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.671633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.671717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.671968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.672050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.672132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 
[2024-11-09 23:40:53.672228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.672323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.672405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.672493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.672579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.672669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.672753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.672848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:27.510 [2024-11-09 23:40:53.672950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.673045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.673127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.673210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.673304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.673383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.673476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.673555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.673668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.673752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.673834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.673917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.674015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.674111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.674195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.674274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.674361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 
23:40:53.674456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.674538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.674625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.674703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.674777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.675606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.675694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.675777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.675879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.675991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.510 [2024-11-09 23:40:53.676073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.676179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.676265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.676346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.676431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.676513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.676605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.676690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.676775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.676862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.676944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.677028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.677114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.677199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.677278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.677362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:27.789 [2024-11-09 23:40:53.677442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.677526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.677618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.677707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.677788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.677871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.677958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.678038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.678120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.678205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.678302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.678384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.678462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.678537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.678665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.678754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.678851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.678930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.679020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.679126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.679233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.679337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.679415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.679513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.679639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.679723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.679804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.679901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.679978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.680054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.680130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.680207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.680285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.680371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.680451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.680533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.680642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.680725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.680804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.680904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.680986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.681067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.681146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.681385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.681476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.681556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.681680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.681765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.681855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.681938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.682024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.682105] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.682187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.682284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.682362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.682436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.682531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.682631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.682716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.682801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.682883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.682971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.683113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.683206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.683308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.683393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.683472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.683610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.683693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.683767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.683872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.683958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.684051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.684980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.685069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.685158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.789 [2024-11-09 23:40:53.685254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 
[2024-11-09 23:40:53.685335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.685416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.685498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.685605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.685692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.685774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.685861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.685957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.686038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.686119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.686197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.686276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.686356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.686439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.686514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.686622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.686703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.686785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.686887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.686969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.687046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.687124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.687198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.687305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.687398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.687479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.687564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.687668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.687770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.687848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.687951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.688053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.688138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.688251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.688332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.688416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.688512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.688642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.688743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.688832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.688927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.689005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.689084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.689161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.689239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.689316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.689398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.689479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.689579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.689669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.689751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.689832] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.689929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.690014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.690091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.690170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.690250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.690333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.690414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.690491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.690751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.690832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.690914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.690996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.691079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.691165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.691260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.691341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.691419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.691503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.691579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.691700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.691797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.691877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.691955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.692035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.692130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 
[2024-11-09 23:40:53.692210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.692285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.692409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.692510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.692601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.692719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.692802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.692879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.692987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.693068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.693147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.693239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.693318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.693398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.693477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.693571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.693677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.693762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.693842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.693938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.694017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.790 [2024-11-09 23:40:53.694097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.694177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.694257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.694337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.694417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.694498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.694604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.694689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.694772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.694856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.694953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.695996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.696079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.697113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.697210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.697291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.697372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.697447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.697562] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.697664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.697759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.697855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.697937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.698020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.698115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.698191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.698267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.698382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.698467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.698550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.698639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.698722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.698805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.698888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.698987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.699069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.699149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.699238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.699318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.699414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.699494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.699575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.699681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.699771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 
[2024-11-09 23:40:53.699869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.699963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.700043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.700124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.700219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.700329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.700452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.700563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.700650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.700732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.700813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.700905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.700983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.701066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.701148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.701229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.701316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.701394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.701470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.701552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.701661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.701742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.701825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.701919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.702004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.791 [2024-11-09 23:40:53.702083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:07:27.791 [2024-11-09 23:40:53.702161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:27.791 [duplicate log lines collapsed: the identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeats continuously from 2024-11-09 23:40:53.702161 onward]
00:07:27.797 [2024-11-09 23:40:53.755000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.755085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.755164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.755244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.755325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.755410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.755519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.755618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.755707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.755793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.755888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.755967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.756047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.756152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.756244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.756341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.756420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.756513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.756618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.756700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.756793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.756876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.756958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.757056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.757139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.757219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.757298] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.757376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.757455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.757536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.757643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.757725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.757807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.757891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.757992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.758069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.758153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.758232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.758316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.758393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.758472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.758553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.758670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.758751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.758831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.758928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.759006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.759092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.759171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.759249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.759325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 [2024-11-09 23:40:53.759409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.797 
[2024-11-09 23:40:53.759488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.759567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.759670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.759750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.759833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.760131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.760225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.760322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.760403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.760503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.760609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.760692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.760766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.760876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.760959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.761040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.761135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.761244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.761338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.761423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.761503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.761610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.761710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.761805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.761886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.761970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.762066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.762146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.762230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.762308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.762402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.762499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.762595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.762676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.762756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.762835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.762939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.763018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.763093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.763202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.763284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.763362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.763440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.763529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.763654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.763759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.763838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.763916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.764007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.764100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.764181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.764262] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.764355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.764432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.764510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.764622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.764704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.764820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.764902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.764986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.765090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.765184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.765268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.765355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.765439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.765532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.765641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.765722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.765804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.766796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.766899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.766980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.767075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.767157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.767236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.767314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.767395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 
[2024-11-09 23:40:53.767481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.767558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.767665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.767749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.767836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.767922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.768021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.768115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.768196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.798 [2024-11-09 23:40:53.768279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.768366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.768467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.768547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.768653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.768741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.768826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.768923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.769001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.769077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.769156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.769241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.769318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.769396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.769472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.769547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.769649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.769734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.769815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.769907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.770007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.770087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.770166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.770289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.770372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.770454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.770548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.770652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.770745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.770826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.770908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.771001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.771079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.771157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.771238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.771320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.771403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.771480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.771559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.771667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.771759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.771847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.771943] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.772023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.772121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.772209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.772290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:27.799 [2024-11-09 23:40:53.772545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.772639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.772721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.772805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.772887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.772969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.773062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.773137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.773250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.773333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.773419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.773516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.773621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.773696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.773790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.773885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.774510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.774621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.774702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.774780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.774868] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.774941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.775036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.775125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.775204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.775282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.775378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.775462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.775542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.775650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.775751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.775834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.775918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.775999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.776078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.776160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.776257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.776336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.776459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.776544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.776642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.776723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.776803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.776898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.776984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.777061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 
[2024-11-09 23:40:53.777140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.777217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.777298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.777375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.777453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.799 [2024-11-09 23:40:53.777528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.777636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.777724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.777813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.777921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.778002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.778081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.778190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.778301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.778383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.778462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.778552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.778657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.778739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.778830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.778930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.779012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.779106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.779185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.779264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.779344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.779424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.779507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.779584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.779692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.779773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.779856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.779957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.780036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.780749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.780833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.780931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.781016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.781099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.781177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.781279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.781365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.781444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.781522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.781614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.781693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.781775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.781861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.781961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.782074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.782177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.782258] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.782342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.782436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.782541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.782647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.782746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.782856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.782936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.783014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.783114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.783193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.783276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.783373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.783450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.783530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.783636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.783720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.783801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.783901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.783980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.784059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.784136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.784222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.784299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.784378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.784458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 
[2024-11-09 23:40:53.784543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.784648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.784730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.784809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.784888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.785015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.785095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.785173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.785252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.785345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.785444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.785547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.785651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.785736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.785830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.785910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.785990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.786087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.786198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.786284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.786364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.786626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.786709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.786793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.786880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.800 [2024-11-09 23:40:53.786982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:07:27.800 [2024-11-09 23:40:53.787061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error repeats back-to-back from 23:40:53.787 to 23:40:53.807 ...]
00:07:27.803 [2024-11-09 23:40:53.807492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 true
[... the same read error keeps repeating from 23:40:53.807 to 23:40:53.823 ...]
00:07:27.805 23:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187
00:07:27.805 23:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
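The two script traces above are the heart of the hotplug stress pass: line 44 checks with kill -0 that the background I/O process (PID 3342187) is still alive, and line 45 detaches namespace 1 from nqn.2016-06.io.spdk:cnode1 over RPC while that I/O is still running. A minimal sketch of this kind of loop is shown below; it is an illustration only, not the contents of ns_hotplug_stress.sh, and the Malloc0 bdev name used for re-attachment is an assumption.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
PERF_PID=3342187
# Keep cycling the namespace for as long as the background I/O process is alive.
while kill -0 "$PERF_PID" 2>/dev/null; do
    "$RPC" nvmf_subsystem_remove_ns "$NQN" 1       # detach nsid 1 while reads are in flight
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0    # re-attach a backing bdev (name assumed here)
done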
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error keeps repeating through 23:40:53.839 ...]
00:07:27.806 [2024-11-09 23:40:53.839662] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.839768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.839851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.839935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.840055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.840141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.840221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.840316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.840396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.840493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.840620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.840704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.840789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.840875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.841021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.841112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.841203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.841298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.841381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.841473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.841583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.841696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.841785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.841867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.841976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.842068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 
[2024-11-09 23:40:53.842155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.842233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.842311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.842395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.842474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.842595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.842680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.842759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.842840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.842943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.843036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.843120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.843242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.843327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.806 [2024-11-09 23:40:53.843423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.843524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.843639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.843727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.843804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.843926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.844018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.844271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.844360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.844457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.844553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.844664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.844747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.844835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.844951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.845031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.845109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.845191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.845278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.845372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.845468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.845550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.845650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.845736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.846396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.846481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.846622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.846706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.846796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.846878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.846980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.847066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.847148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.847236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.847311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.847413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.847509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.847630] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.847711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.847790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.847894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.847997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.848107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.848206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.848288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.848386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.848489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.848597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.848683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.848764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.848846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.848952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.849043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.849125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.849213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.849293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.849374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.849463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.849542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.849648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.849731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.849820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.849938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 
[2024-11-09 23:40:53.850016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.850095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.850184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.850290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.850376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.850467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.850544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.850655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.850741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.850820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.850927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.851069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.851165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.851253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.851362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.851443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.851528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.851646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.851732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.851835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.851914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.807 [2024-11-09 23:40:53.852013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.852107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.852195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.852307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.852597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.852680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.852761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.852848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.852946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.853038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.853134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.853217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.853302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.853389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.853476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.853554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.853671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.853755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.853843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.853935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.854048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.854129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.854209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.854294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.854372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.854455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.854534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.854649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.854737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.854819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.854928] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.855006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.855084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.855161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.855246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.855332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.855408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.855488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.855620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.855708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.855802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.855885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.855987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.856077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.856177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.856272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.856391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.856471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.856560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.856663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.856744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.856845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.856959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.857049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.857138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.857217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 
[2024-11-09 23:40:53.857296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.857385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.857473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.857553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.857682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.857771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.857852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.857968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.858052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.858136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.858219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.859233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.859313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.859393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.859474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.859597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.859674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.859767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.859852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.859959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.860039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.860125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.860249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.860325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.860436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.860527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.860626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.860725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.860805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.860914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.861009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.861089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.861168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.861287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.861370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.861447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.861545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.861667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.861756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.861838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.861928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.862009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.862104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.808 [2024-11-09 23:40:53.862198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.862289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.862366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.862445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.862535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.862656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.862740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.862823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.862931] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.863012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.863103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.863222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.863300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.863386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.863468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.863562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.863679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.863760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.863844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.863953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.864055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.864133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.864208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.864285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.864380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.864467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.864542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.864678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.864762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.864861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.864954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.865034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.865350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.865435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 
[2024-11-09 23:40:53.865518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.865644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.865730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.865814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.865902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.866017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.866126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.866213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.866293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.866372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.866447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.866535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.866649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.866746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.866828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.867371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:27.809 [2024-11-09 23:40:53.867464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.867551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.867675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.867758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.867841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.867934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.868055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.868137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.868217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 
23:40:53.868301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.868385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.868472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.868550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.868664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.868766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.868849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.868929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.869019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.869106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.869228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.869309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.869392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.869474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.869579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.869689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.869772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.869865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.869970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.870050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.870135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.870216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.870311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.870394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.870468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.870581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:27.809 [2024-11-09 23:40:53.870686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.870767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.870842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.870980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.871076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.871171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.871274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.871357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.871438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.871527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.871636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.871723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.871811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.809 [2024-11-09 23:40:53.871917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.871999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.872083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.872162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.872248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.872331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.872415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.872507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.872620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.872702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.872790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.872885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.872990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:27.810 [2024-11-09 23:40:53.873069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.926201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.926282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.926408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.926497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.926611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.926695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.926774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.926857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.926942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.927047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.927127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.927234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.927325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.927410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.927517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.927613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.927699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.927774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.927884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.927968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.928049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.928140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.928232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.928315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.928398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.928479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.928561] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.928650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.928732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.815 [2024-11-09 23:40:53.928813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.928896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.928995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.929084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.929176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.929268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.929361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.930344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.930426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.930513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.930615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.930715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.930808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.930901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.930985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.931065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.931145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.931252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.931335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.931414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.931502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.931612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.931712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 
[2024-11-09 23:40:53.931793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.931909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.931995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.932087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.932192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.932275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.932372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.932447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.932563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.932660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.932740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.932831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.932916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.933017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.933102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.933188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.933276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.933358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.933443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.933523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.933620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.933713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.933797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.933883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.933975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.934057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.934150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.934228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.934324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.934411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.934498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.934599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.934697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.934775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.934856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.934946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.935040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.935131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.935211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.935288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.935371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.935461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.935557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.935654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.935736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.935811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.935923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.936016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.936276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.936371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.936458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.936555] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.936669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.936749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.936835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.936922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.937008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.937095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.937206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.937288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.937369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.937464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.816 [2024-11-09 23:40:53.937555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.937657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.937735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.937815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.937903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.938018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.938099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.938180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.938262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.938351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.938436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.938521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.938626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.938739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.938829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 
[2024-11-09 23:40:53.938917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.939012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.939096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.939189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.939294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.940061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.940162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.940254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.940333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.940417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.940507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.940612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.940696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.940778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.940859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.940965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.941054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.941137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.941219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.941306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.941390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.941483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.941563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.941667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.941748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.941826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.941918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.942004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.942096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.942197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.942287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.942370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.942449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.942532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.942630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.942747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.942828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.942913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.943009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.943089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.943172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.943252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.943342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.943468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.943560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.943664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.943745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.943826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.943916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.944014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.944100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.944198] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.944279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.944361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.944444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.944527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.944629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.944710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.944788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.944871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.944956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.945038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.945120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.945201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.945281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.945362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.945442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.945526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.945632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.945872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.945968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.946052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.946137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.946216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.946303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.946383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.946463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 
[2024-11-09 23:40:53.946545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.946633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.946709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.946807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.946896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.946984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.947061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.817 [2024-11-09 23:40:53.947142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.947225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.947323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.947409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.947490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.947582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.947676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.947777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.947881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.947963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.948048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.948130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.948213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.948291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.949011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.949114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.949212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.949298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.949379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.949460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.949543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.949650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.949731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.949811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.949896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.949979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.950064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.950144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.950225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.950306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.950385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.950473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.950561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.950662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.950742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.950821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.950916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.951006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.951089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.951170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.951261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.951352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.951433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.951517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.951618] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.951737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.951817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.951917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.952018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.952106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.952192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.952285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.952367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.952448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.952525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.952622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.952702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.952786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.952876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.952956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.953035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.953114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.953192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.953285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.953366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.953448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.953528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.953633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.953720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.953801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 
[2024-11-09 23:40:53.953893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.953974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.954061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.954143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.954227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.954302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.954385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.954485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.954752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.954832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.954933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.955025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.955111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.955189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.955288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.955371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.955454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.955535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.955631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.955713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.955792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.955882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.955967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.956053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.956140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.956223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.956302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.956385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.956485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.818 [2024-11-09 23:40:53.956566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.956661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.956746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.956823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.956928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.957007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.957087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.957168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.957249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.957350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.957433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.957514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.957612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.958357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.958442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.958528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.958637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.958720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.958802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.958897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.958982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.959062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:27.819 [2024-11-09 23:40:53.959140] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:27.819 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 entry repeated continuously, timestamped 2024-11-09 23:40:53.959 through 23:40:54.011 (log marks 00:07:27.819-00:07:28.091), omitted ...]
00:07:28.091
[2024-11-09 23:40:54.011143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.011226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.011311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.011394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.011478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.011561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.011651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.011735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.011824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.011909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.011989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.012079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.012162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.012244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.091 [2024-11-09 23:40:54.012330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.012414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.012495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.012579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.012669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.012757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.012837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.012916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.012999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.013082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.013167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.013265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.013344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.013424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.013517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.013603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.013681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.013761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.013839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.013946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.014024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.014127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.014210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.014292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.014390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.014470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.014552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.014643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.014750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.014832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.014921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015493] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.015994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.016075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.016311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.016396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.016479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.016561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.016649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.016734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.016818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.016903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.016987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.017068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.017149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.017237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.017320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.017401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.017482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.017567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.018189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.018274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 
[2024-11-09 23:40:54.018358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.018439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.018516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.018629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.018708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.018788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.018893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.018982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.019066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.019164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.019258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.019349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.019430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.019509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.019604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.019684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.019766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.019848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.019931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.020018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.020108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.020188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.020270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.020351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.020430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.020514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.020605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.020691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.020779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.020861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.020942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.021023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.021106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.021196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.092 [2024-11-09 23:40:54.021274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.021356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.021437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.021518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.021616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.021709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.021790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.021871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.021954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.022045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.022124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.022204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.022283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.022362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.022460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.022544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.022651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.022736] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.022821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.022906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.022991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.023080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.023178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.023261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.023384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.023472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.023558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.023654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.023928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.024011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.024096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.024188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.024270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.024351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.024435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.024520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.024612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.024694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.024777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.024857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.024937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.025025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.025124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 
[2024-11-09 23:40:54.025204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.025287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.025370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.025450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.025531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.025622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.025706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.025792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.025871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.025953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.026034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.026116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.026196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.026279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.026359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.026441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.026520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.026607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.026689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.026774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.026852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.026954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.027033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.027117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.027208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.027289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.027368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.027447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.027522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.027632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.027715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.027799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.028883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.028966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.029049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.029131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.029214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.029301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.029381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.029464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.029547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.029643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.029732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.029813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.029896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.029978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.030062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.030147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.030228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.030307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.030388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.030467] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.030556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.030652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.093 [2024-11-09 23:40:54.030739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.030820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.030904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.030988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.031071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.031156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.031240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.031322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.031404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.031481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.031575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.031665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.031749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.031832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.031916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.032002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.032107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.032181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.032283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.032365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.032444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.032543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.032638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 
[2024-11-09 23:40:54.032718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.032792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.032901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.032985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.033065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.033145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.033227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.033305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.033388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.033465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.033552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.033651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.033738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.033818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.033897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.033978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.034061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.034144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.034225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.034459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.034545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.034634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.034720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.034805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.034886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.034968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.035049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.035135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.035223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.035305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.035387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.035470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.035559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.035654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.035736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.036251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.036336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.036420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.036506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.036598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.036711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.036792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.036879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.036961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.037069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.037152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.037227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.037334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.037418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.037497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.037610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.037692] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.037773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.037853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.037933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.038014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.038097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.038179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.038263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.038350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.038432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.038514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.038604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.038689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.038776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.038856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.038937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.039017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.039106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.039193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.039273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.039356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.039437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.039519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.039612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.039698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.094 [2024-11-09 23:40:54.039780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 
[2024-11-09 23:40:54.039863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.039947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.040029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.040111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.040194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.040276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.040357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.040436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.040530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.040618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.040714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.040798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.040884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.040965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.041039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.041146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.041227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.041310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.041418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.041504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.041596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.041675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.041946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.042032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.042112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.042191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:28.095 [2024-11-09 23:40:54.042271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.095 [... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* line ("Read NLB 1 * block size 512 > SGL length 1") repeats continuously with timestamps 2024-11-09 23:40:54.042 through 23:40:54.094 ...] 00:07:28.097 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:28.100
[2024-11-09 23:40:54.094824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.100 [2024-11-09 23:40:54.094916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.100 [2024-11-09 23:40:54.094991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.100 [2024-11-09 23:40:54.095067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.100 [2024-11-09 23:40:54.095143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.095219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.095339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.095427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.095510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.095596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.095676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.095759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.095838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.095971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.096057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.096141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.096222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.096315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.096392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.096470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.096545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.096652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.096733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.096821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.096917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.096997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.097074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.097153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.097233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.097314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.097391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.097650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.097731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.097812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.097891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.097977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.098065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.098161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.098239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.098317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.098397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.098480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.098560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.098662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.098751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.098831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.098913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.099019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.099129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.099213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.099294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.099393] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.099479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.099602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.099681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.099759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.099875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.099959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.100071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.100161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.100246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.100322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.100455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.100538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.100626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.100705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.100785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.100885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.100962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.101040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.101118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.101198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.101282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.101361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.101440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.101518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 [2024-11-09 23:40:54.101602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 
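After one last instance of that read error, the trace below returns to the single-namespace hot-plug loop of target/ns_hotplug_stress.sh (script lines 44-50 in the markers): while the background I/O generator (PID 3342187 in this run) is still alive, namespace 1 is removed from nqn.2016-06.io.spdk:cnode1, the Delay0 bdev is re-added as a namespace, and the backing NULL1 null bdev is grown one step at a time with bdev_null_resize. A minimal sketch of that pattern, reconstructed from the repeating trace rather than copied from the script, using only the paths, names and PID that appear in the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1002                              # assumed starting value; the log is already at 1003 in this excerpt
    while kill -0 3342187; do                   # loop framing assumed: run while the I/O generator is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # re-attach the Delay0 bdev as a namespace
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size  # grow the null bdev while I/O keeps running
    done

Each pass shows up in the log as the @44-@50 lines that follow, with null_size counting up by one per iteration.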
[2024-11-09 23:40:54.101689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:28.101 23:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.359 23:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:28.359 23:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:28.617 true 00:07:28.617 23:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:28.617 23:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.875 23:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.134 23:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:29.134 23:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:29.391 true 00:07:29.391 23:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:29.391 23:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.648 23:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.905 23:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:29.905 23:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:30.163 true 00:07:30.163 23:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:30.163 23:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.097 23:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.355 23:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:31.355 23:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1006 00:07:31.613 true 00:07:31.613 23:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:31.613 23:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.180 23:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.180 23:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:32.180 23:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:32.438 true 00:07:32.438 23:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:32.438 23:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.696 23:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.954 23:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:32.954 23:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:33.212 true 00:07:33.470 23:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:33.470 23:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.404 23:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.662 23:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:34.662 23:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:34.920 true 00:07:34.920 23:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:34.920 23:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.178 23:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.436 23:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:35.436 
23:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:35.695 true 00:07:35.695 23:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:35.695 23:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.952 23:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.210 23:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:36.210 23:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:36.469 true 00:07:36.469 23:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:36.469 23:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.844 23:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.844 23:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:37.844 23:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:38.102 true 00:07:38.102 23:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:38.102 23:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.361 23:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.618 23:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:38.618 23:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:38.876 true 00:07:38.876 23:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:38.876 23:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.443 23:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.443 23:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:39.443 23:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:39.701 true 00:07:39.701 23:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:39.701 23:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.637 23:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.895 23:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:40.895 23:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:41.153 true 00:07:41.153 23:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:41.153 23:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.411 23:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.670 23:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:41.670 23:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:41.928 true 00:07:41.928 23:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:41.928 23:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.186 23:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.444 23:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:42.444 23:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:42.702 true 00:07:42.702 23:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:42.702 23:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.076 23:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.076 23:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:44.076 23:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:44.334 true 00:07:44.334 23:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:44.334 23:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.592 23:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.849 23:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:44.849 23:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:45.107 true 00:07:45.107 23:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:45.107 23:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.365 23:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.930 23:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:45.930 23:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:45.930 true 00:07:45.930 23:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:45.930 23:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.862 23:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.121 23:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:47.121 23:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1021 00:07:47.379 true 00:07:47.379 23:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:47.379 23:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.637 23:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.896 23:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:47.896 23:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:48.154 true 00:07:48.154 23:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:48.154 23:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.720 23:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.720 23:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:48.720 23:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:48.978 true 00:07:48.978 23:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:48.978 23:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.914 23:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.172 23:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:50.172 23:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:50.431 true 00:07:50.431 23:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:50.431 23:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.689 23:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.977 23:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:50.977 23:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:51.262 true 00:07:51.262 23:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:51.262 23:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.197 23:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.455 23:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:52.456 23:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:52.714 true 00:07:52.714 23:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:52.714 23:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.972 23:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.230 23:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:53.230 23:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:53.489 true 00:07:53.489 23:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:53.489 23:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.421 23:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.680 23:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:54.680 23:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:54.680 Initializing NVMe Controllers 00:07:54.680 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:54.680 Controller IO queue size 128, less than required. 00:07:54.680 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:54.680 Controller IO queue size 128, less than required. 00:07:54.680 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:54.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:54.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:54.680 Initialization complete. Launching workers. 00:07:54.680 ======================================================== 00:07:54.680 Latency(us) 00:07:54.680 Device Information : IOPS MiB/s Average min max 00:07:54.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1040.63 0.51 55475.02 4267.36 1017497.85 00:07:54.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7082.74 3.46 18073.17 4398.59 478471.79 00:07:54.680 ======================================================== 00:07:54.680 Total : 8123.37 3.97 22864.46 4267.36 1017497.85 00:07:54.680 00:07:54.938 true 00:07:54.938 23:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3342187 00:07:54.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3342187) - No such process 00:07:54.938 23:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3342187 00:07:54.938 23:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.196 23:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.454 23:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:55.455 23:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:55.455 23:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:55.455 23:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:55.455 23:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:55.712 null0 00:07:55.712 23:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:55.712 23:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:55.713 23:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:55.971 null1 00:07:55.971 23:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:55.971 23:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # 
(( i < nthreads )) 00:07:55.971 23:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:56.229 null2 00:07:56.229 23:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:56.229 23:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:56.229 23:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:56.487 null3 00:07:56.487 23:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:56.487 23:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:56.487 23:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:56.745 null4 00:07:56.745 23:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:56.745 23:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:57.003 23:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:57.260 null5 00:07:57.260 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:57.260 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:57.260 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:57.518 null6 00:07:57.519 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:57.519 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:57.519 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:57.777 null7 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
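From 00:07:55.455 onward the test has switched to a parallel phase: nthreads=8, an empty pids array, one loop that creates null0 through null7 with bdev_null_create (100 MB, 4096-byte block size), and a second loop that launches an add_remove worker per bdev in the background, recording each PID with pids+=($!). A sketch of that fan-out under the same assumptions as above (the for-loop framing and the final wait on the array are inferred; the log later shows the expanded form, wait 3346134 ... 3346147):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096   # 100 MB null bdev with a 4096-byte block size
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &          # nsid 1..8 paired with null0..null7, as traced
        pids+=($!)                                # add_remove itself is sketched further below
    done
    wait "${pids[@]}"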
00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:57.777 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3346134 3346135 3346136 3346139 3346141 3346143 3346145 3346147 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.778 23:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:58.036 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:58.036 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.036 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.036 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.036 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.036 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.036 23:41:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.036 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:58.293 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.294 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
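Script lines @62-@66 drive those workers: one background add_remove job is started per null bdev (null0 through null7 map to namespace IDs 1 through 8), the PIDs are collected, and the test then waits on all of them (the "wait 3346134 3346135 ..." entry above), which is why the add and remove calls for different namespaces interleave so heavily. A sketch of that driver loop, assuming nthreads=8 and that the null0..null7 bdevs were created earlier in the test, outside this excerpt:

    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; ++i )); do
        add_remove $(( i + 1 )) "null$i" &   # e.g. "add_remove 4 null3" in the trace
        pids+=($!)
    done
    wait "${pids[@]}"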
00:07:58.294 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:58.294 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.294 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.294 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:58.552 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.552 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:58.552 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.552 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.552 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.552 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.552 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.552 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:58.810 23:41:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.810 23:41:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:58.810 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.810 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.810 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:59.377 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:59.377 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.377 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.377 23:41:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:59.377 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:59.377 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:59.377 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:59.377 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:59.636 23:41:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.636 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:59.895 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:59.895 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:59.895 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.895 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.895 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:59.895 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:59.895 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:59.895 23:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.154 23:41:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.154 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:08:00.412 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:00.412 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:00.412 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.412 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:00.412 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:00.412 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.412 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:00.412 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.670 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.670 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.671 23:41:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.671 23:41:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.929 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:00.929 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:00.929 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.929 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:00.929 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:00.929 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.929 23:41:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.929 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
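The remaining bursts are later iterations of the same ten-cycle loop on each of the eight workers. When debugging a failure in this area, a single hotplug cycle can be reproduced by hand against the running target with the same two RPCs that appear throughout the trace (subsystem NQN and bdev name copied from the log; issuing them manually is illustrative only and not part of the test):

    # hot-add null3 as namespace 4 of cnode1, then hot-remove it
    scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4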
00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.496 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.755 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.755 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.755 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.755 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.755 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.755 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.013 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.013 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.013 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.013 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.013 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.013 23:41:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.013 23:41:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.013 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.271 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.271 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.271 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.271 23:41:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.271 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.271 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.271 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.271 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.529 23:41:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.529 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.787 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.787 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.787 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.787 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.787 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.787 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.787 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.787 23:41:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.045 23:41:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.045 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:08:03.303 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.303 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.562 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.562 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.562 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.562 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.562 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.562 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.820 
23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:03.820 rmmod nvme_tcp 00:08:03.820 rmmod nvme_fabrics 00:08:03.820 rmmod nvme_keyring 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3341630 ']' 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3341630 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3341630 ']' 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3341630 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:08:03.820 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:03.821 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3341630 00:08:03.821 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:03.821 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:03.821 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3341630' 00:08:03.821 killing process with pid 3341630 00:08:03.821 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3341630 00:08:03.821 23:41:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3341630 00:08:05.194 23:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:05.194 23:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:05.194 23:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:05.194 23:41:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:05.194 23:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:05.194 23:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:05.194 23:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:05.194 23:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:05.194 23:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:05.194 23:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.194 23:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.194 23:41:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.098 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:07.098 00:08:07.098 real 0m48.616s 00:08:07.098 user 3m42.104s 00:08:07.098 sys 0m16.498s 00:08:07.098 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.098 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:07.098 ************************************ 00:08:07.098 END TEST nvmf_ns_hotplug_stress 00:08:07.098 ************************************ 00:08:07.098 23:41:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:07.098 23:41:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:07.098 23:41:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.098 23:41:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.098 ************************************ 00:08:07.098 START TEST nvmf_delete_subsystem 00:08:07.098 ************************************ 00:08:07.098 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:07.098 * Looking for test storage... 
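Before nvmf_delete_subsystem starts above, ns_hotplug_stress finishes with the standard nvmftestfini teardown from nvmf/common.sh: the host-side NVMe/TCP kernel modules are unloaded, the nvmf_tgt reactor started for this test (pid 3341630 in this run) is killed and reaped, the SPDK_NVMF iptables rules are dropped, the test network namespaces are removed, and the leftover address on cvl_0_1 is flushed. Condensed into equivalent shell steps; this is a paraphrase of the trace, not the literal common.sh source, and the nvmfpid variable name is illustrative:

    sync
    modprobe -v -r nvme-tcp             # the trace shows nvme_tcp, nvme_fabrics and nvme_keyring being rmmod'ed here
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"  # nvmf_tgt reactor, 3341630 above
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK_NVMF rules
    # remove_spdk_ns then tears down the cvl_* test namespaces (not expanded in this excerpt)
    ip -4 addr flush cvl_0_1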
00:08:07.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.098 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.098 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.098 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.357 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.357 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.357 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.357 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.357 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.358 --rc genhtml_branch_coverage=1 00:08:07.358 --rc genhtml_function_coverage=1 00:08:07.358 --rc genhtml_legend=1 00:08:07.358 --rc geninfo_all_blocks=1 00:08:07.358 --rc geninfo_unexecuted_blocks=1 00:08:07.358 00:08:07.358 ' 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.358 --rc genhtml_branch_coverage=1 00:08:07.358 --rc genhtml_function_coverage=1 00:08:07.358 --rc genhtml_legend=1 00:08:07.358 --rc geninfo_all_blocks=1 00:08:07.358 --rc geninfo_unexecuted_blocks=1 00:08:07.358 00:08:07.358 ' 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.358 --rc genhtml_branch_coverage=1 00:08:07.358 --rc genhtml_function_coverage=1 00:08:07.358 --rc genhtml_legend=1 00:08:07.358 --rc geninfo_all_blocks=1 00:08:07.358 --rc geninfo_unexecuted_blocks=1 00:08:07.358 00:08:07.358 ' 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.358 --rc genhtml_branch_coverage=1 00:08:07.358 --rc genhtml_function_coverage=1 00:08:07.358 --rc genhtml_legend=1 00:08:07.358 --rc geninfo_all_blocks=1 00:08:07.358 --rc geninfo_unexecuted_blocks=1 00:08:07.358 00:08:07.358 ' 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.358 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.359 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.359 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:07.359 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:07.359 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.359 23:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:09.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.262 
23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:09.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:09.262 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:09.262 Found net devices under 0000:0a:00.1: cvl_0_1 
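The two E810 ports discovered above (cvl_0_0 and cvl_0_1) are split between a private network namespace for the target and the host side for the initiator, so the NVMe/TCP traffic in this test crosses the physical NIC. Condensed, the setup traced below amounts to the following (interface names, addresses and the port-4420 rule are exactly those reported in this run; the trailing comments are annotations added here):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP through on the host side
    ping -c 1 10.0.0.2                                                  # reachability check in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1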
00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.262 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:09.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:08:09.263 00:08:09.263 --- 10.0.0.2 ping statistics --- 00:08:09.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.263 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:08:09.263 00:08:09.263 --- 10.0.0.1 ping statistics --- 00:08:09.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.263 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.263 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3349161 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3349161 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3349161 ']' 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.522 23:41:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.522 23:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.522 [2024-11-09 23:41:35.583275] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:08:09.522 [2024-11-09 23:41:35.583407] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.780 [2024-11-09 23:41:35.730578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:09.780 [2024-11-09 23:41:35.866123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.780 [2024-11-09 23:41:35.866213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.780 [2024-11-09 23:41:35.866239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.780 [2024-11-09 23:41:35.866263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.780 [2024-11-09 23:41:35.866282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.780 [2024-11-09 23:41:35.868902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.780 [2024-11-09 23:41:35.868903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.347 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:10.347 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:08:10.347 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:10.347 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.347 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.605 [2024-11-09 23:41:36.568614] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:10.605 23:41:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.605 [2024-11-09 23:41:36.585854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.605 NULL1 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.605 Delay0 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3349314 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:10.605 23:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:10.605 [2024-11-09 23:41:36.720821] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
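With nvmf_tgt now listening inside cvl_0_0_ns_spdk, all of the target configuration above is driven over JSON-RPC; the harness helper rpc_cmd is, to a first approximation, scripts/rpc.py talking to the target's /var/tmp/spdk.sock (that mapping is an assumption here, while the arguments are exactly those traced above):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512                      # 1000 MB null backend, 512-byte blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 bdev inserts on the order of a second of latency in front of NULL1 (the four 1000000 values are microseconds), so plenty of commands are still in flight when nvmf_delete_subsystem is issued a few lines further down.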
00:08:12.504 23:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.504 23:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.504 23:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 [2024-11-09 23:41:38.900114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed 
with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 
Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 starting I/O failed: -6 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Write completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.762 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 [2024-11-09 23:41:38.901656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, 
sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Write completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 Read completed with error (sct=0, sc=8) 00:08:12.763 [2024-11-09 23:41:38.902376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:08:13.696 [2024-11-09 23:41:39.859489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 [2024-11-09 23:41:39.903380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, 
sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 [2024-11-09 23:41:39.904136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Read completed with error (sct=0, sc=8) 00:08:13.954 Write completed with error (sct=0, sc=8) 00:08:13.954 [2024-11-09 23:41:39.905580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:08:13.955 Read completed with error (sct=0, sc=8) 00:08:13.955 Write completed with error (sct=0, sc=8) 00:08:13.955 Write completed with error (sct=0, sc=8) 00:08:13.955 Read completed with error (sct=0, sc=8) 00:08:13.955 Read completed with error (sct=0, sc=8) 00:08:13.955 Write completed with error (sct=0, sc=8) 00:08:13.955 Write completed with error (sct=0, sc=8) 00:08:13.955 Write completed with error (sct=0, sc=8) 00:08:13.955 Write completed with error (sct=0, sc=8) 00:08:13.955 Read completed with error (sct=0, sc=8) 00:08:13.955 Read completed with error (sct=0, sc=8) 00:08:13.955 Read completed with error (sct=0, sc=8) 00:08:13.955 Read completed with error (sct=0, sc=8) 00:08:13.955 Write completed with error (sct=0, sc=8) 00:08:13.955 Read completed with error (sct=0, sc=8) 00:08:13.955 Write completed with error (sct=0, sc=8) 00:08:13.955 Read completed with error (sct=0, sc=8) 00:08:13.955 [2024-11-09 23:41:39.905862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x615000020600 is same with the state(6) to be set 00:08:13.955 Initializing NVMe Controllers 00:08:13.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:13.955 Controller IO queue size 128, less than required. 00:08:13.955 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:13.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:13.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:13.955 Initialization complete. Launching workers. 00:08:13.955 ======================================================== 00:08:13.955 Latency(us) 00:08:13.955 Device Information : IOPS MiB/s Average min max 00:08:13.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.73 0.09 888515.37 1018.79 1016687.19 00:08:13.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.41 0.08 929315.08 1106.08 1017285.81 00:08:13.955 ======================================================== 00:08:13.955 Total : 331.14 0.16 907786.98 1018.79 1017285.81 00:08:13.955 00:08:13.955 [2024-11-09 23:41:39.910721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:08:13.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:13.955 23:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.955 23:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:13.955 23:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3349314 00:08:13.955 23:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3349314 00:08:14.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3349314) - No such process 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3349314 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3349314 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3349314 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.521 [2024-11-09 23:41:40.432435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3349723 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3349723 00:08:14.521 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.521 [2024-11-09 23:41:40.548610] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
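The trace above is the core of the delete_subsystem case: a subsystem, listener, and namespace are created over RPC, spdk_nvme_perf is started against the target, and the subsystem is then torn down while I/O is in flight, which produces the ABORTED - SQ DELETION completions seen earlier and forces the perf process to exit. A minimal sketch of that flow follows; the rpc.py path, the background/wait plumbing, and the explicit nvmf_delete_subsystem call are assumptions (the delete itself is not echoed in this excerpt), while the other commands and options are taken from the trace.

# Minimal sketch of the flow exercised above, not the test script itself.
# Assumptions: rpc.py is at scripts/rpc.py and the target already answers on
# the default RPC socket; the nvmf_delete_subsystem call is inferred from the
# SQ DELETION aborts above, it is not shown in this log.
rpc=./scripts/rpc.py

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Start I/O against the subsystem in the background (same options as the log).
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Delete the subsystem while the workload is running (assumed step), then wait
# up to ~10s for the perf process to notice the aborted queues and exit.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && exit 1
    sleep 0.5
done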
00:08:14.779 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.779 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3349723 00:08:14.779 23:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.344 23:41:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.344 23:41:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3349723 00:08:15.344 23:41:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.910 23:41:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.910 23:41:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3349723 00:08:15.910 23:41:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.475 23:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.475 23:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3349723 00:08:16.475 23:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.040 23:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.040 23:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3349723 00:08:17.040 23:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.298 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.298 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3349723 00:08:17.298 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.865 Initializing NVMe Controllers 00:08:17.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:17.865 Controller IO queue size 128, less than required. 00:08:17.865 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:17.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:17.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:17.865 Initialization complete. Launching workers. 
00:08:17.865 ======================================================== 00:08:17.865 Latency(us) 00:08:17.865 Device Information : IOPS MiB/s Average min max 00:08:17.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005459.22 1000375.64 1041226.47 00:08:17.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006453.88 1000238.17 1044208.12 00:08:17.865 ======================================================== 00:08:17.865 Total : 256.00 0.12 1005956.55 1000238.17 1044208.12 00:08:17.865 00:08:17.865 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.865 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3349723 00:08:17.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3349723) - No such process 00:08:17.865 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3349723 00:08:17.865 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:17.865 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:17.865 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.865 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:17.865 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.865 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:17.865 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.865 23:41:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:17.865 rmmod nvme_tcp 00:08:17.865 rmmod nvme_fabrics 00:08:17.865 rmmod nvme_keyring 00:08:17.865 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.865 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:17.865 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:17.865 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3349161 ']' 00:08:17.865 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3349161 00:08:17.865 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3349161 ']' 00:08:17.865 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3349161 00:08:17.865 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:08:17.865 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:17.865 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3349161 00:08:18.124 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:18.124 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:08:18.124 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3349161' 00:08:18.124 killing process with pid 3349161 00:08:18.124 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3349161 00:08:18.124 23:41:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3349161 00:08:19.059 23:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:19.059 23:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:19.059 23:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:19.059 23:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:19.059 23:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:19.059 23:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:19.059 23:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:19.059 23:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:19.059 23:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:19.059 23:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.059 23:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.059 23:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:21.597 00:08:21.597 real 0m14.041s 00:08:21.597 user 0m31.039s 00:08:21.597 sys 0m3.207s 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.597 ************************************ 00:08:21.597 END TEST nvmf_delete_subsystem 00:08:21.597 ************************************ 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.597 ************************************ 00:08:21.597 START TEST nvmf_host_management 00:08:21.597 ************************************ 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:21.597 * Looking for test storage... 
00:08:21.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:21.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.597 --rc genhtml_branch_coverage=1 00:08:21.597 --rc genhtml_function_coverage=1 00:08:21.597 --rc genhtml_legend=1 00:08:21.597 --rc geninfo_all_blocks=1 00:08:21.597 --rc geninfo_unexecuted_blocks=1 00:08:21.597 00:08:21.597 ' 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:21.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.597 --rc genhtml_branch_coverage=1 00:08:21.597 --rc genhtml_function_coverage=1 00:08:21.597 --rc genhtml_legend=1 00:08:21.597 --rc geninfo_all_blocks=1 00:08:21.597 --rc geninfo_unexecuted_blocks=1 00:08:21.597 00:08:21.597 ' 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:21.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.597 --rc genhtml_branch_coverage=1 00:08:21.597 --rc genhtml_function_coverage=1 00:08:21.597 --rc genhtml_legend=1 00:08:21.597 --rc geninfo_all_blocks=1 00:08:21.597 --rc geninfo_unexecuted_blocks=1 00:08:21.597 00:08:21.597 ' 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:21.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.597 --rc genhtml_branch_coverage=1 00:08:21.597 --rc genhtml_function_coverage=1 00:08:21.597 --rc genhtml_legend=1 00:08:21.597 --rc geninfo_all_blocks=1 00:08:21.597 --rc geninfo_unexecuted_blocks=1 00:08:21.597 00:08:21.597 ' 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.597 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:21.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:21.598 23:41:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:23.501 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:23.501 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:23.501 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.501 23:41:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:23.501 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:08:23.501 00:08:23.501 --- 10.0.0.2 ping statistics --- 00:08:23.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.501 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:08:23.501 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:23.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:08:23.502 00:08:23.502 --- 10.0.0.1 ping statistics --- 00:08:23.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.502 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3352246 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3352246 00:08:23.502 23:41:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3352246 ']' 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:23.502 23:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.760 [2024-11-09 23:41:49.739665] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:08:23.760 [2024-11-09 23:41:49.739817] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.760 [2024-11-09 23:41:49.887619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.032 [2024-11-09 23:41:50.032128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.032 [2024-11-09 23:41:50.032210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.032 [2024-11-09 23:41:50.032237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.032 [2024-11-09 23:41:50.032263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.032 [2024-11-09 23:41:50.032283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
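At this point host_management starts its own target inside the cvl_0_0_ns_spdk namespace with nvmfappstart -m 0x1E, waits for the RPC socket, creates the TCP transport, and exposes a Malloc0 bdev behind nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420 (the transport creation and listener notice appear just below). The following is a condensed sketch assuming the standard SPDK RPC names; the test assembles most of these calls into rpcs.txt rather than echoing them, so only the launch, transport creation, and listener notice are visible in the log.

# Condensed sketch of the target bring-up around this point (assumed RPC set).
NS="ip netns exec cvl_0_0_ns_spdk"

$NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# the test polls the RPC socket via waitforlisten; framework_wait_init is an
# equivalent way to block until the app is ready
./scripts/rpc.py framework_wait_init

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from above
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420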
00:08:24.032 [2024-11-09 23:41:50.035197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.032 [2024-11-09 23:41:50.035298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.032 [2024-11-09 23:41:50.035362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.032 [2024-11-09 23:41:50.035369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:24.691 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:24.691 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:24.691 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.691 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.691 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.691 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.691 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.692 [2024-11-09 23:41:50.719897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.692 Malloc0 00:08:24.692 [2024-11-09 23:41:50.855759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3352505 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3352505 /var/tmp/bdevperf.sock 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3352505 ']' 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:24.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.692 { 00:08:24.692 "params": { 00:08:24.692 "name": "Nvme$subsystem", 00:08:24.692 "trtype": "$TEST_TRANSPORT", 00:08:24.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.692 "adrfam": "ipv4", 00:08:24.692 "trsvcid": "$NVMF_PORT", 00:08:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.692 "hdgst": ${hdgst:-false}, 00:08:24.692 "ddgst": ${ddgst:-false} 00:08:24.692 }, 00:08:24.692 "method": "bdev_nvme_attach_controller" 00:08:24.692 } 00:08:24.692 EOF 00:08:24.692 )") 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:24.692 23:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.692 "params": { 00:08:24.692 "name": "Nvme0", 00:08:24.692 "trtype": "tcp", 00:08:24.692 "traddr": "10.0.0.2", 00:08:24.692 "adrfam": "ipv4", 00:08:24.692 "trsvcid": "4420", 00:08:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:24.692 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:24.692 "hdgst": false, 00:08:24.692 "ddgst": false 00:08:24.692 }, 00:08:24.692 "method": "bdev_nvme_attach_controller" 00:08:24.692 }' 00:08:24.950 [2024-11-09 23:41:50.971327] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:08:24.951 [2024-11-09 23:41:50.971468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352505 ] 00:08:24.951 [2024-11-09 23:41:51.117828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.209 [2024-11-09 23:41:51.247643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.776 Running I/O for 10 seconds... 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.776 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.036 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.036 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=259 00:08:26.036 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 259 -ge 100 ']' 00:08:26.036 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:26.036 23:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:26.036 23:41:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:26.036 23:41:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:26.036 23:41:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.036 23:41:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.036 [2024-11-09 23:41:52.003910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.003995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 
[2024-11-09 23:41:52.004406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.036 [2024-11-09 23:41:52.004857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.036 [2024-11-09 23:41:52.004890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 
23:41:52.004911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.004945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.004966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.004990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.005978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.005999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.006044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.006088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.006133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.006178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.006222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.006271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.006316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.006360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.006405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.006450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.006494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.037 [2024-11-09 23:41:52.006539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.037 [2024-11-09 23:41:52.006562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.038 [2024-11-09 23:41:52.006601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.038 [2024-11-09 23:41:52.006627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.038 [2024-11-09 23:41:52.006649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.038 [2024-11-09 23:41:52.006673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.038 [2024-11-09 23:41:52.006694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.038 [2024-11-09 23:41:52.006718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.038 [2024-11-09 23:41:52.006740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.038 [2024-11-09 23:41:52.006764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.038 [2024-11-09 23:41:52.006785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.038 [2024-11-09 23:41:52.006809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.038 [2024-11-09 23:41:52.006830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.038 [2024-11-09 23:41:52.006859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.038 [2024-11-09 23:41:52.006891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.038 [2024-11-09 23:41:52.006915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.038 [2024-11-09 23:41:52.006936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.038 [2024-11-09 23:41:52.006959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.038 [2024-11-09 23:41:52.006981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.038 [2024-11-09 23:41:52.007004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.038 [2024-11-09 23:41:52.007025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.038 23:41:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.038 23:41:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:26.038 23:41:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.038 23:41:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.038 [2024-11-09 23:41:52.008646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:26.038 task offset: 47232 on job bdev=Nvme0n1 fails 00:08:26.038 00:08:26.038 Latency(us) 00:08:26.038 [2024-11-09T22:41:52.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.038 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:26.038 Job: Nvme0n1 ended in about 0.28 seconds with error 00:08:26.038 Verification LBA range: start 0x0 length 0x400 00:08:26.038 Nvme0n1 : 0.28 1148.63 71.79 229.73 0.00 44535.85 4199.16 40972.14 00:08:26.038 [2024-11-09T22:41:52.239Z] =================================================================================================================== 00:08:26.038 [2024-11-09T22:41:52.239Z] Total : 1148.63 71.79 229.73 0.00 44535.85 4199.16 40972.14 00:08:26.038 [2024-11-09 23:41:52.013716] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.038 [2024-11-09 23:41:52.013767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x6150001f2500 (9): Bad file descriptor 00:08:26.038 23:41:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.038 23:41:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:26.038 [2024-11-09 23:41:52.065297] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3352505 00:08:26.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3352505) - No such process 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:26.973 { 00:08:26.973 "params": { 00:08:26.973 "name": "Nvme$subsystem", 00:08:26.973 "trtype": "$TEST_TRANSPORT", 00:08:26.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.973 "adrfam": "ipv4", 00:08:26.973 "trsvcid": "$NVMF_PORT", 00:08:26.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.973 "hdgst": ${hdgst:-false}, 00:08:26.973 "ddgst": ${ddgst:-false} 00:08:26.973 }, 00:08:26.973 "method": "bdev_nvme_attach_controller" 00:08:26.973 } 00:08:26.973 EOF 00:08:26.973 )") 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:26.973 23:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:26.973 "params": { 00:08:26.973 "name": "Nvme0", 00:08:26.973 "trtype": "tcp", 00:08:26.973 "traddr": "10.0.0.2", 00:08:26.973 "adrfam": "ipv4", 00:08:26.973 "trsvcid": "4420", 00:08:26.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:26.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:26.973 "hdgst": false, 00:08:26.973 "ddgst": false 00:08:26.973 }, 00:08:26.973 "method": "bdev_nvme_attach_controller" 00:08:26.973 }' 00:08:26.973 [2024-11-09 23:41:53.107531] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:08:26.973 [2024-11-09 23:41:53.107710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352726 ] 00:08:27.231 [2024-11-09 23:41:53.243881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.231 [2024-11-09 23:41:53.373433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.798 Running I/O for 1 seconds... 00:08:29.174 1344.00 IOPS, 84.00 MiB/s 00:08:29.174 Latency(us) 00:08:29.174 [2024-11-09T22:41:55.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.174 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:29.174 Verification LBA range: start 0x0 length 0x400 00:08:29.174 Nvme0n1 : 1.04 1359.60 84.98 0.00 0.00 46276.58 8058.50 40777.96 00:08:29.174 [2024-11-09T22:41:55.375Z] =================================================================================================================== 00:08:29.174 [2024-11-09T22:41:55.375Z] Total : 1359.60 84.98 0.00 0.00 46276.58 8058.50 40777.96 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:29.741 rmmod nvme_tcp 00:08:29.741 rmmod nvme_fabrics 00:08:29.741 rmmod nvme_keyring 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3352246 ']' 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3352246 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3352246 ']' 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3352246 00:08:29.741 23:41:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:29.741 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3352246 00:08:30.000 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:30.000 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:30.000 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3352246' 00:08:30.000 killing process with pid 3352246 00:08:30.000 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3352246 00:08:30.000 23:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3352246 00:08:30.934 [2024-11-09 23:41:57.127197] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:31.193 23:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:31.193 23:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:31.193 23:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:31.193 23:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:31.193 23:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:31.193 23:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:31.193 23:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:31.193 23:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:31.193 23:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:31.193 23:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.193 23:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.193 23:41:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.096 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:33.096 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:33.096 00:08:33.096 real 0m11.974s 00:08:33.096 user 0m32.570s 00:08:33.096 sys 0m3.180s 00:08:33.096 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.096 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 ************************************ 00:08:33.096 END TEST nvmf_host_management 00:08:33.096 ************************************ 00:08:33.096 23:41:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
00:08:33.096 23:41:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:33.096 23:41:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:33.096 23:41:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.355 ************************************ 00:08:33.355 START TEST nvmf_lvol 00:08:33.355 ************************************ 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:33.355 * Looking for test storage... 00:08:33.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:33.355 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:33.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.356 --rc genhtml_branch_coverage=1 00:08:33.356 --rc genhtml_function_coverage=1 00:08:33.356 --rc genhtml_legend=1 00:08:33.356 --rc geninfo_all_blocks=1 00:08:33.356 --rc geninfo_unexecuted_blocks=1 00:08:33.356 00:08:33.356 ' 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:33.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.356 --rc genhtml_branch_coverage=1 00:08:33.356 --rc genhtml_function_coverage=1 00:08:33.356 --rc genhtml_legend=1 00:08:33.356 --rc geninfo_all_blocks=1 00:08:33.356 --rc geninfo_unexecuted_blocks=1 00:08:33.356 00:08:33.356 ' 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:33.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.356 --rc genhtml_branch_coverage=1 00:08:33.356 --rc genhtml_function_coverage=1 00:08:33.356 --rc genhtml_legend=1 00:08:33.356 --rc geninfo_all_blocks=1 00:08:33.356 --rc geninfo_unexecuted_blocks=1 00:08:33.356 00:08:33.356 ' 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:33.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.356 --rc genhtml_branch_coverage=1 00:08:33.356 --rc genhtml_function_coverage=1 00:08:33.356 --rc genhtml_legend=1 00:08:33.356 --rc geninfo_all_blocks=1 00:08:33.356 --rc geninfo_unexecuted_blocks=1 00:08:33.356 00:08:33.356 ' 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.356 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.357 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:33.357 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:33.357 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:33.357 23:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:35.258 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:35.516 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:35.516 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.516 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:35.517 23:42:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:35.517 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:35.517 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:35.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:08:35.517 00:08:35.517 --- 10.0.0.2 ping statistics --- 00:08:35.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.517 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:35.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:08:35.517 00:08:35.517 --- 10.0.0.1 ping statistics --- 00:08:35.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.517 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3355236 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3355236 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3355236 ']' 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:35.517 23:42:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:35.517 [2024-11-09 23:42:01.705952] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
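The nvmf_tcp_init steps traced above wire the two e810 ports into a point-to-point NVMe/TCP test link: cvl_0_0 is moved into a dedicated network namespace and addressed as the target (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt is then launched inside that namespace. A condensed sketch of the equivalent manual setup, using the interface names, addresses and flags as they appear in the trace (ipts is SPDK's thin wrapper around iptables):

  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the first e810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port toward the initiator interface (tagged for later cleanup)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                            # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target application then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7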
00:08:35.517 [2024-11-09 23:42:01.706105] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.775 [2024-11-09 23:42:01.852940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:36.034 [2024-11-09 23:42:01.991290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.034 [2024-11-09 23:42:01.991365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.034 [2024-11-09 23:42:01.991391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.034 [2024-11-09 23:42:01.991417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.034 [2024-11-09 23:42:01.991438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.034 [2024-11-09 23:42:01.994167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.034 [2024-11-09 23:42:01.994233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.034 [2024-11-09 23:42:01.994237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.600 23:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:36.600 23:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:08:36.600 23:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.600 23:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:36.600 23:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:36.600 23:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.600 23:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:36.857 [2024-11-09 23:42:02.979073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.857 23:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:37.422 23:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:37.422 23:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:37.679 23:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:37.679 23:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:37.937 23:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:38.196 23:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cd93cb27-215c-481a-b4dc-12dc5bdf8f1e 00:08:38.196 23:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cd93cb27-215c-481a-b4dc-12dc5bdf8f1e lvol 20 00:08:38.453 23:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=54af0df0-ba56-46ec-b6a1-28b7ea310ce5 00:08:38.453 23:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:38.710 23:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 54af0df0-ba56-46ec-b6a1-28b7ea310ce5 00:08:38.968 23:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:39.226 [2024-11-09 23:42:05.409036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.483 23:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.741 23:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3355817 00:08:39.741 23:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:39.741 23:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:40.676 23:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 54af0df0-ba56-46ec-b6a1-28b7ea310ce5 MY_SNAPSHOT 00:08:40.935 23:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3b69e3af-26c3-40ca-b86c-7233abc9620d 00:08:40.935 23:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 54af0df0-ba56-46ec-b6a1-28b7ea310ce5 30 00:08:41.501 23:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3b69e3af-26c3-40ca-b86c-7233abc9620d MY_CLONE 00:08:41.759 23:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=448f9bb5-3beb-4802-9179-7d093fbb80dc 00:08:41.759 23:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 448f9bb5-3beb-4802-9179-7d093fbb80dc 00:08:42.695 23:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3355817 00:08:50.804 Initializing NVMe Controllers 00:08:50.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:50.804 Controller IO queue size 128, less than required. 00:08:50.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
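Every configuration step in this test is driven through scripts/rpc.py against the target's /var/tmp/spdk.sock. Condensed from the trace, the nvmf_lvol flow looks roughly like the following sketch (rpc.py abbreviates the full workspace path shown above; the <...> placeholders stand for the UUIDs printed in the log):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                       # Malloc0
  rpc.py bdev_malloc_create 64 512                       # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs              # -> <lvs-uuid>
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20          # -> <lvol-uuid>
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # drive random writes from the initiator side while the lvol is reshaped underneath
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT      # -> <snapshot-uuid>
  rpc.py bdev_lvol_resize   <lvol-uuid> 30
  rpc.py bdev_lvol_clone    <snapshot-uuid> MY_CLONE     # -> <clone-uuid>
  rpc.py bdev_lvol_inflate  <clone-uuid>
  wait                                                   # 10 s perf run, results below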
00:08:50.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:50.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:50.804 Initialization complete. Launching workers. 00:08:50.804 ======================================================== 00:08:50.804 Latency(us) 00:08:50.804 Device Information : IOPS MiB/s Average min max 00:08:50.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8126.80 31.75 15755.71 431.59 173379.60 00:08:50.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8029.60 31.37 15955.64 3423.03 189277.83 00:08:50.804 ======================================================== 00:08:50.804 Total : 16156.40 63.11 15855.07 431.59 189277.83 00:08:50.804 00:08:50.804 23:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:50.804 23:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 54af0df0-ba56-46ec-b6a1-28b7ea310ce5 00:08:50.804 23:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cd93cb27-215c-481a-b4dc-12dc5bdf8f1e 00:08:51.062 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:51.062 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:51.062 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:51.062 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.063 rmmod nvme_tcp 00:08:51.063 rmmod nvme_fabrics 00:08:51.063 rmmod nvme_keyring 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3355236 ']' 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3355236 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3355236 ']' 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3355236 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3355236 00:08:51.063 23:42:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3355236' 00:08:51.063 killing process with pid 3355236 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3355236 00:08:51.063 23:42:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3355236 00:08:52.439 23:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:52.439 23:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:52.439 23:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:52.439 23:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:52.439 23:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:52.439 23:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:52.439 23:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:52.439 23:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:52.439 23:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:52.439 23:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.439 23:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.439 23:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:54.973 00:08:54.973 real 0m21.284s 00:08:54.973 user 1m11.717s 00:08:54.973 sys 0m5.277s 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:54.973 ************************************ 00:08:54.973 END TEST nvmf_lvol 00:08:54.973 ************************************ 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.973 ************************************ 00:08:54.973 START TEST nvmf_lvs_grow 00:08:54.973 ************************************ 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:54.973 * Looking for test storage... 
00:08:54.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:54.973 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:54.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.974 --rc genhtml_branch_coverage=1 00:08:54.974 --rc genhtml_function_coverage=1 00:08:54.974 --rc genhtml_legend=1 00:08:54.974 --rc geninfo_all_blocks=1 00:08:54.974 --rc geninfo_unexecuted_blocks=1 00:08:54.974 00:08:54.974 ' 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:54.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.974 --rc genhtml_branch_coverage=1 00:08:54.974 --rc genhtml_function_coverage=1 00:08:54.974 --rc genhtml_legend=1 00:08:54.974 --rc geninfo_all_blocks=1 00:08:54.974 --rc geninfo_unexecuted_blocks=1 00:08:54.974 00:08:54.974 ' 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:54.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.974 --rc genhtml_branch_coverage=1 00:08:54.974 --rc genhtml_function_coverage=1 00:08:54.974 --rc genhtml_legend=1 00:08:54.974 --rc geninfo_all_blocks=1 00:08:54.974 --rc geninfo_unexecuted_blocks=1 00:08:54.974 00:08:54.974 ' 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:54.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.974 --rc genhtml_branch_coverage=1 00:08:54.974 --rc genhtml_function_coverage=1 00:08:54.974 --rc genhtml_legend=1 00:08:54.974 --rc geninfo_all_blocks=1 00:08:54.974 --rc geninfo_unexecuted_blocks=1 00:08:54.974 00:08:54.974 ' 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:54.974 23:42:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:54.974 23:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.876 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:56.877 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:56.877 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:56.877 23:42:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:56.877 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:56.877 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:56.877 23:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:56.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:08:56.877 00:08:56.877 --- 10.0.0.2 ping statistics --- 00:08:56.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.877 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:08:56.877 00:08:56.877 --- 10.0.0.1 ping statistics --- 00:08:56.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.877 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3359737 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3359737 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3359737 ']' 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:56.877 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.878 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:56.878 23:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:57.135 [2024-11-09 23:42:23.145480] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
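The nvmftestinit sequence repeated here for the lvs_grow suite starts by enumerating the supported NICs before any namespace work: each e810 PCI function (0x8086:0x159b) is mapped to its kernel netdev through sysfs, and the interfaces that are up become the target and initiator sides. A rough paraphrase of what nvmf/common.sh does, reconstructed from the trace rather than the verbatim script:

  e810=(0000:0a:00.0 0000:0a:00.1)            # PCI functions found during the scan above
  net_devs=()
  for pci in "${e810[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev(s) bound to this port
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path -> cvl_0_0 / cvl_0_1
      net_devs+=("${pci_net_devs[@]}")                    # only interfaces with operstate up
  done
  NVMF_TARGET_INTERFACE=${net_devs[0]}        # cvl_0_0
  NVMF_INITIATOR_INTERFACE=${net_devs[1]}     # cvl_0_1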
00:08:57.136 [2024-11-09 23:42:23.145645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.136 [2024-11-09 23:42:23.294666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.393 [2024-11-09 23:42:23.432245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.393 [2024-11-09 23:42:23.432317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.393 [2024-11-09 23:42:23.432343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.393 [2024-11-09 23:42:23.432368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.393 [2024-11-09 23:42:23.432388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.393 [2024-11-09 23:42:23.433958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.958 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:57.958 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:08:57.958 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.958 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:57.958 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:57.958 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.958 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:58.215 [2024-11-09 23:42:24.415469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.474 ************************************ 00:08:58.474 START TEST lvs_grow_clean 00:08:58.474 ************************************ 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:58.474 23:42:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.474 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.732 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:58.732 23:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:58.990 23:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d26bf7cb-60b7-4dd3-8672-c092d1fea711 00:08:58.990 23:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d26bf7cb-60b7-4dd3-8672-c092d1fea711 00:08:58.990 23:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:59.248 23:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:59.248 23:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:59.248 23:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d26bf7cb-60b7-4dd3-8672-c092d1fea711 lvol 150 00:08:59.507 23:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=880dbd07-0cbf-4dcb-aba7-94b1d488bfac 00:08:59.507 23:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.507 23:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:59.765 [2024-11-09 23:42:25.861615] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:59.765 [2024-11-09 23:42:25.861735] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:59.765 true 00:08:59.765 23:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
d26bf7cb-60b7-4dd3-8672-c092d1fea711 00:08:59.765 23:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:00.024 23:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:00.024 23:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:00.307 23:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 880dbd07-0cbf-4dcb-aba7-94b1d488bfac 00:09:00.585 23:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:00.844 [2024-11-09 23:42:27.005384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.844 23:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:01.102 23:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3360311 00:09:01.102 23:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:01.102 23:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:01.102 23:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3360311 /var/tmp/bdevperf.sock 00:09:01.102 23:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3360311 ']' 00:09:01.102 23:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:01.102 23:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:01.102 23:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:01.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:01.102 23:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:01.102 23:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:01.360 [2024-11-09 23:42:27.375245] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
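The lvs_grow_clean case exercises growing a logical volume store on a file-backed AIO bdev while I/O is in flight. A condensed sketch of the flow, taken from the rpc.py calls in this trace (the grow itself happens further down, once bdevperf is writing; <lvs-uuid> and <lvol-uuid> stand for the UUIDs printed in the log, and rpc.py again abbreviates the full workspace path):

  truncate -s 200M test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs      # -> <lvs-uuid>, 49 data clusters
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150         # -> <lvol-uuid>
  truncate -s 400M test/nvmf/target/aio_bdev             # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                        # bdev grows 51200 -> 102400 blocks
  rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # still 49
  # expose the lvol over NVMe/TCP and start bdevperf against it
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
      -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>            # clusters grow 49 -> 99 under load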
00:09:01.360 [2024-11-09 23:42:27.375384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3360311 ] 00:09:01.360 [2024-11-09 23:42:27.519940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.618 [2024-11-09 23:42:27.655560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.552 23:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:02.552 23:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:09:02.552 23:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:02.810 Nvme0n1 00:09:02.810 23:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:03.068 [ 00:09:03.068 { 00:09:03.068 "name": "Nvme0n1", 00:09:03.068 "aliases": [ 00:09:03.068 "880dbd07-0cbf-4dcb-aba7-94b1d488bfac" 00:09:03.068 ], 00:09:03.068 "product_name": "NVMe disk", 00:09:03.068 "block_size": 4096, 00:09:03.068 "num_blocks": 38912, 00:09:03.068 "uuid": "880dbd07-0cbf-4dcb-aba7-94b1d488bfac", 00:09:03.068 "numa_id": 0, 00:09:03.068 "assigned_rate_limits": { 00:09:03.068 "rw_ios_per_sec": 0, 00:09:03.068 "rw_mbytes_per_sec": 0, 00:09:03.068 "r_mbytes_per_sec": 0, 00:09:03.068 "w_mbytes_per_sec": 0 00:09:03.068 }, 00:09:03.068 "claimed": false, 00:09:03.068 "zoned": false, 00:09:03.068 "supported_io_types": { 00:09:03.068 "read": true, 00:09:03.068 "write": true, 00:09:03.068 "unmap": true, 00:09:03.068 "flush": true, 00:09:03.068 "reset": true, 00:09:03.068 "nvme_admin": true, 00:09:03.068 "nvme_io": true, 00:09:03.068 "nvme_io_md": false, 00:09:03.068 "write_zeroes": true, 00:09:03.068 "zcopy": false, 00:09:03.068 "get_zone_info": false, 00:09:03.068 "zone_management": false, 00:09:03.068 "zone_append": false, 00:09:03.068 "compare": true, 00:09:03.068 "compare_and_write": true, 00:09:03.068 "abort": true, 00:09:03.068 "seek_hole": false, 00:09:03.068 "seek_data": false, 00:09:03.068 "copy": true, 00:09:03.068 "nvme_iov_md": false 00:09:03.068 }, 00:09:03.068 "memory_domains": [ 00:09:03.068 { 00:09:03.068 "dma_device_id": "system", 00:09:03.068 "dma_device_type": 1 00:09:03.068 } 00:09:03.068 ], 00:09:03.068 "driver_specific": { 00:09:03.068 "nvme": [ 00:09:03.068 { 00:09:03.068 "trid": { 00:09:03.068 "trtype": "TCP", 00:09:03.068 "adrfam": "IPv4", 00:09:03.068 "traddr": "10.0.0.2", 00:09:03.068 "trsvcid": "4420", 00:09:03.068 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:03.068 }, 00:09:03.068 "ctrlr_data": { 00:09:03.068 "cntlid": 1, 00:09:03.068 "vendor_id": "0x8086", 00:09:03.068 "model_number": "SPDK bdev Controller", 00:09:03.068 "serial_number": "SPDK0", 00:09:03.068 "firmware_revision": "25.01", 00:09:03.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:03.068 "oacs": { 00:09:03.068 "security": 0, 00:09:03.068 "format": 0, 00:09:03.068 "firmware": 0, 00:09:03.068 "ns_manage": 0 00:09:03.068 }, 00:09:03.068 "multi_ctrlr": true, 00:09:03.068 
"ana_reporting": false 00:09:03.068 }, 00:09:03.068 "vs": { 00:09:03.068 "nvme_version": "1.3" 00:09:03.068 }, 00:09:03.068 "ns_data": { 00:09:03.068 "id": 1, 00:09:03.068 "can_share": true 00:09:03.068 } 00:09:03.068 } 00:09:03.068 ], 00:09:03.068 "mp_policy": "active_passive" 00:09:03.068 } 00:09:03.068 } 00:09:03.068 ] 00:09:03.068 23:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3360465 00:09:03.068 23:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.068 23:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:03.068 Running I/O for 10 seconds... 00:09:04.001 Latency(us) 00:09:04.001 [2024-11-09T22:42:30.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.001 Nvme0n1 : 1.00 10415.00 40.68 0.00 0.00 0.00 0.00 0.00 00:09:04.001 [2024-11-09T22:42:30.202Z] =================================================================================================================== 00:09:04.001 [2024-11-09T22:42:30.202Z] Total : 10415.00 40.68 0.00 0.00 0.00 0.00 0.00 00:09:04.001 00:09:04.936 23:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d26bf7cb-60b7-4dd3-8672-c092d1fea711 00:09:05.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.194 Nvme0n1 : 2.00 10605.00 41.43 0.00 0.00 0.00 0.00 0.00 00:09:05.194 [2024-11-09T22:42:31.395Z] =================================================================================================================== 00:09:05.194 [2024-11-09T22:42:31.395Z] Total : 10605.00 41.43 0.00 0.00 0.00 0.00 0.00 00:09:05.194 00:09:05.194 true 00:09:05.452 23:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d26bf7cb-60b7-4dd3-8672-c092d1fea711 00:09:05.452 23:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:05.710 23:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:05.710 23:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:05.710 23:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3360465 00:09:06.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.277 Nvme0n1 : 3.00 10637.33 41.55 0.00 0.00 0.00 0.00 0.00 00:09:06.277 [2024-11-09T22:42:32.478Z] =================================================================================================================== 00:09:06.277 [2024-11-09T22:42:32.478Z] Total : 10637.33 41.55 0.00 0.00 0.00 0.00 0.00 00:09:06.277 00:09:07.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.212 Nvme0n1 : 4.00 10740.25 41.95 0.00 0.00 0.00 0.00 0.00 00:09:07.212 [2024-11-09T22:42:33.413Z] 
=================================================================================================================== 00:09:07.212 [2024-11-09T22:42:33.413Z] Total : 10740.25 41.95 0.00 0.00 0.00 0.00 0.00 00:09:07.212 00:09:08.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.148 Nvme0n1 : 5.00 10802.00 42.20 0.00 0.00 0.00 0.00 0.00 00:09:08.148 [2024-11-09T22:42:34.349Z] =================================================================================================================== 00:09:08.148 [2024-11-09T22:42:34.349Z] Total : 10802.00 42.20 0.00 0.00 0.00 0.00 0.00 00:09:08.148 00:09:09.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.083 Nvme0n1 : 6.00 10864.33 42.44 0.00 0.00 0.00 0.00 0.00 00:09:09.083 [2024-11-09T22:42:35.284Z] =================================================================================================================== 00:09:09.083 [2024-11-09T22:42:35.284Z] Total : 10864.33 42.44 0.00 0.00 0.00 0.00 0.00 00:09:09.083 00:09:10.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.018 Nvme0n1 : 7.00 10890.71 42.54 0.00 0.00 0.00 0.00 0.00 00:09:10.018 [2024-11-09T22:42:36.219Z] =================================================================================================================== 00:09:10.018 [2024-11-09T22:42:36.219Z] Total : 10890.71 42.54 0.00 0.00 0.00 0.00 0.00 00:09:10.018 00:09:11.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.393 Nvme0n1 : 8.00 10910.50 42.62 0.00 0.00 0.00 0.00 0.00 00:09:11.393 [2024-11-09T22:42:37.594Z] =================================================================================================================== 00:09:11.393 [2024-11-09T22:42:37.594Z] Total : 10910.50 42.62 0.00 0.00 0.00 0.00 0.00 00:09:11.393 00:09:12.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.328 Nvme0n1 : 9.00 10925.89 42.68 0.00 0.00 0.00 0.00 0.00 00:09:12.328 [2024-11-09T22:42:38.529Z] =================================================================================================================== 00:09:12.328 [2024-11-09T22:42:38.529Z] Total : 10925.89 42.68 0.00 0.00 0.00 0.00 0.00 00:09:12.328 00:09:13.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.262 Nvme0n1 : 10.00 10950.90 42.78 0.00 0.00 0.00 0.00 0.00 00:09:13.262 [2024-11-09T22:42:39.463Z] =================================================================================================================== 00:09:13.262 [2024-11-09T22:42:39.464Z] Total : 10950.90 42.78 0.00 0.00 0.00 0.00 0.00 00:09:13.263 00:09:13.263 00:09:13.263 Latency(us) 00:09:13.263 [2024-11-09T22:42:39.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.263 Nvme0n1 : 10.01 10953.10 42.79 0.00 0.00 11679.66 3155.44 22427.88 00:09:13.263 [2024-11-09T22:42:39.464Z] =================================================================================================================== 00:09:13.263 [2024-11-09T22:42:39.464Z] Total : 10953.10 42.79 0.00 0.00 11679.66 3155.44 22427.88 00:09:13.263 { 00:09:13.263 "results": [ 00:09:13.263 { 00:09:13.263 "job": "Nvme0n1", 00:09:13.263 "core_mask": "0x2", 00:09:13.263 "workload": "randwrite", 00:09:13.263 "status": "finished", 00:09:13.263 "queue_depth": 128, 00:09:13.263 "io_size": 4096, 00:09:13.263 
"runtime": 10.00968, 00:09:13.263 "iops": 10953.09740171514, 00:09:13.263 "mibps": 42.78553672544977, 00:09:13.263 "io_failed": 0, 00:09:13.263 "io_timeout": 0, 00:09:13.263 "avg_latency_us": 11679.661963050457, 00:09:13.263 "min_latency_us": 3155.437037037037, 00:09:13.263 "max_latency_us": 22427.875555555554 00:09:13.263 } 00:09:13.263 ], 00:09:13.263 "core_count": 1 00:09:13.263 } 00:09:13.263 23:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3360311 00:09:13.263 23:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3360311 ']' 00:09:13.263 23:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3360311 00:09:13.263 23:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:09:13.263 23:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:13.263 23:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3360311 00:09:13.263 23:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:13.263 23:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:13.263 23:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3360311' 00:09:13.263 killing process with pid 3360311 00:09:13.263 23:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3360311 00:09:13.263 Received shutdown signal, test time was about 10.000000 seconds 00:09:13.263 00:09:13.263 Latency(us) 00:09:13.263 [2024-11-09T22:42:39.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.263 [2024-11-09T22:42:39.464Z] =================================================================================================================== 00:09:13.263 [2024-11-09T22:42:39.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:13.263 23:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3360311 00:09:14.197 23:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:14.454 23:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:14.711 23:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d26bf7cb-60b7-4dd3-8672-c092d1fea711 00:09:14.711 23:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:14.969 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:14.969 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:14.969 23:42:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:15.226 [2024-11-09 23:42:41.279811] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:15.226 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d26bf7cb-60b7-4dd3-8672-c092d1fea711 00:09:15.226 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:15.226 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d26bf7cb-60b7-4dd3-8672-c092d1fea711 00:09:15.226 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.226 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.226 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.226 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.227 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.227 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.227 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.227 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:15.227 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d26bf7cb-60b7-4dd3-8672-c092d1fea711 00:09:15.485 request: 00:09:15.485 { 00:09:15.485 "uuid": "d26bf7cb-60b7-4dd3-8672-c092d1fea711", 00:09:15.485 "method": "bdev_lvol_get_lvstores", 00:09:15.485 "req_id": 1 00:09:15.485 } 00:09:15.485 Got JSON-RPC error response 00:09:15.485 response: 00:09:15.485 { 00:09:15.485 "code": -19, 00:09:15.485 "message": "No such device" 00:09:15.485 } 00:09:15.485 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:15.485 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:15.485 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:15.485 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:15.485 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.051 aio_bdev 00:09:16.051 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 880dbd07-0cbf-4dcb-aba7-94b1d488bfac 00:09:16.051 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=880dbd07-0cbf-4dcb-aba7-94b1d488bfac 00:09:16.051 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:16.051 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:09:16.051 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:16.051 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:16.051 23:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:16.308 23:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 880dbd07-0cbf-4dcb-aba7-94b1d488bfac -t 2000 00:09:16.566 [ 00:09:16.566 { 00:09:16.566 "name": "880dbd07-0cbf-4dcb-aba7-94b1d488bfac", 00:09:16.566 "aliases": [ 00:09:16.566 "lvs/lvol" 00:09:16.566 ], 00:09:16.566 "product_name": "Logical Volume", 00:09:16.566 "block_size": 4096, 00:09:16.566 "num_blocks": 38912, 00:09:16.566 "uuid": "880dbd07-0cbf-4dcb-aba7-94b1d488bfac", 00:09:16.566 "assigned_rate_limits": { 00:09:16.566 "rw_ios_per_sec": 0, 00:09:16.566 "rw_mbytes_per_sec": 0, 00:09:16.566 "r_mbytes_per_sec": 0, 00:09:16.567 "w_mbytes_per_sec": 0 00:09:16.567 }, 00:09:16.567 "claimed": false, 00:09:16.567 "zoned": false, 00:09:16.567 "supported_io_types": { 00:09:16.567 "read": true, 00:09:16.567 "write": true, 00:09:16.567 "unmap": true, 00:09:16.567 "flush": false, 00:09:16.567 "reset": true, 00:09:16.567 "nvme_admin": false, 00:09:16.567 "nvme_io": false, 00:09:16.567 "nvme_io_md": false, 00:09:16.567 "write_zeroes": true, 00:09:16.567 "zcopy": false, 00:09:16.567 "get_zone_info": false, 00:09:16.567 "zone_management": false, 00:09:16.567 "zone_append": false, 00:09:16.567 "compare": false, 00:09:16.567 "compare_and_write": false, 00:09:16.567 "abort": false, 00:09:16.567 "seek_hole": true, 00:09:16.567 "seek_data": true, 00:09:16.567 "copy": false, 00:09:16.567 "nvme_iov_md": false 00:09:16.567 }, 00:09:16.567 "driver_specific": { 00:09:16.567 "lvol": { 00:09:16.567 "lvol_store_uuid": "d26bf7cb-60b7-4dd3-8672-c092d1fea711", 00:09:16.567 "base_bdev": "aio_bdev", 00:09:16.567 "thin_provision": false, 00:09:16.567 "num_allocated_clusters": 38, 00:09:16.567 "snapshot": false, 00:09:16.567 "clone": false, 00:09:16.567 "esnap_clone": false 00:09:16.567 } 00:09:16.567 } 00:09:16.567 } 00:09:16.567 ] 00:09:16.567 23:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:09:16.567 23:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d26bf7cb-60b7-4dd3-8672-c092d1fea711 00:09:16.567 
23:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:16.825 23:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:16.825 23:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d26bf7cb-60b7-4dd3-8672-c092d1fea711 00:09:16.825 23:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:17.084 23:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:17.084 23:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 880dbd07-0cbf-4dcb-aba7-94b1d488bfac 00:09:17.342 23:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d26bf7cb-60b7-4dd3-8672-c092d1fea711 00:09:17.600 23:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:18.166 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.166 00:09:18.166 real 0m19.644s 00:09:18.166 user 0m19.383s 00:09:18.166 sys 0m1.932s 00:09:18.166 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:18.167 ************************************ 00:09:18.167 END TEST lvs_grow_clean 00:09:18.167 ************************************ 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:18.167 ************************************ 00:09:18.167 START TEST lvs_grow_dirty 00:09:18.167 ************************************ 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.167 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.425 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:18.425 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:18.683 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e455b8f3-35bc-44cc-ba2b-649959879631 00:09:18.683 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:18.683 23:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:18.941 23:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:18.941 23:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:18.941 23:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e455b8f3-35bc-44cc-ba2b-649959879631 lvol 150 00:09:19.199 23:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=383fe713-d36f-4a06-b177-2b2f254e5acb 00:09:19.200 23:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:19.200 23:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:19.458 [2024-11-09 23:42:45.640707] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:19.458 [2024-11-09 23:42:45.640845] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:19.458 true 00:09:19.458 23:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:19.458 23:42:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:20.023 23:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:20.023 23:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:20.282 23:42:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 383fe713-d36f-4a06-b177-2b2f254e5acb 00:09:20.540 23:42:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:20.797 [2024-11-09 23:42:46.784469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.797 23:42:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.056 23:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3362643 00:09:21.056 23:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:21.056 23:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:21.056 23:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3362643 /var/tmp/bdevperf.sock 00:09:21.056 23:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3362643 ']' 00:09:21.056 23:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:21.056 23:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:21.056 23:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:21.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:21.056 23:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:21.056 23:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:21.056 [2024-11-09 23:42:47.157979] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
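The dirty-grow variant being set up here builds the same stack on top of a plain file; a condensed sketch of the provisioning and grow steps, with placeholder paths (in the run above the grow_lvstore call is issued while bdevperf I/O is still in flight, whereas the sketch does everything inline):

# Sketch only: 200M AIO-backed lvstore, a 150M lvol, then grow the backing file
# to 400M and let the lvstore pick up the new space.  AIO_FILE and SPDK_DIR are
# placeholders; the 49/99 cluster counts match this run (4 MiB clusters).
SPDK_DIR=/path/to/spdk
RPC=$SPDK_DIR/scripts/rpc.py
AIO_FILE=/path/to/aio_bdev

truncate -s 200M "$AIO_FILE"
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
LVS=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
LVOL=$($RPC bdev_lvol_create -u "$LVS" lvol 150)

# A 200M file with 4 MiB clusters yields 49 data clusters (the rest holds metadata).
$RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49

# Grow the file, rescan the AIO bdev so SPDK sees the new size, then grow the lvstore.
truncate -s 400M "$AIO_FILE"
$RPC bdev_aio_rescan aio_bdev
$RPC bdev_lvol_grow_lvstore -u "$LVS"
$RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # now 99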
00:09:21.056 [2024-11-09 23:42:47.158117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3362643 ] 00:09:21.314 [2024-11-09 23:42:47.302872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.314 [2024-11-09 23:42:47.440817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.248 23:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:22.248 23:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:09:22.248 23:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:22.505 Nvme0n1 00:09:22.505 23:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:22.763 [ 00:09:22.763 { 00:09:22.763 "name": "Nvme0n1", 00:09:22.763 "aliases": [ 00:09:22.763 "383fe713-d36f-4a06-b177-2b2f254e5acb" 00:09:22.763 ], 00:09:22.763 "product_name": "NVMe disk", 00:09:22.763 "block_size": 4096, 00:09:22.763 "num_blocks": 38912, 00:09:22.763 "uuid": "383fe713-d36f-4a06-b177-2b2f254e5acb", 00:09:22.763 "numa_id": 0, 00:09:22.763 "assigned_rate_limits": { 00:09:22.763 "rw_ios_per_sec": 0, 00:09:22.763 "rw_mbytes_per_sec": 0, 00:09:22.763 "r_mbytes_per_sec": 0, 00:09:22.763 "w_mbytes_per_sec": 0 00:09:22.763 }, 00:09:22.763 "claimed": false, 00:09:22.763 "zoned": false, 00:09:22.763 "supported_io_types": { 00:09:22.763 "read": true, 00:09:22.763 "write": true, 00:09:22.763 "unmap": true, 00:09:22.763 "flush": true, 00:09:22.763 "reset": true, 00:09:22.763 "nvme_admin": true, 00:09:22.763 "nvme_io": true, 00:09:22.763 "nvme_io_md": false, 00:09:22.763 "write_zeroes": true, 00:09:22.763 "zcopy": false, 00:09:22.763 "get_zone_info": false, 00:09:22.763 "zone_management": false, 00:09:22.763 "zone_append": false, 00:09:22.763 "compare": true, 00:09:22.763 "compare_and_write": true, 00:09:22.763 "abort": true, 00:09:22.763 "seek_hole": false, 00:09:22.763 "seek_data": false, 00:09:22.763 "copy": true, 00:09:22.763 "nvme_iov_md": false 00:09:22.763 }, 00:09:22.763 "memory_domains": [ 00:09:22.763 { 00:09:22.763 "dma_device_id": "system", 00:09:22.763 "dma_device_type": 1 00:09:22.763 } 00:09:22.763 ], 00:09:22.763 "driver_specific": { 00:09:22.763 "nvme": [ 00:09:22.763 { 00:09:22.763 "trid": { 00:09:22.763 "trtype": "TCP", 00:09:22.763 "adrfam": "IPv4", 00:09:22.763 "traddr": "10.0.0.2", 00:09:22.763 "trsvcid": "4420", 00:09:22.763 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:22.763 }, 00:09:22.763 "ctrlr_data": { 00:09:22.763 "cntlid": 1, 00:09:22.763 "vendor_id": "0x8086", 00:09:22.763 "model_number": "SPDK bdev Controller", 00:09:22.763 "serial_number": "SPDK0", 00:09:22.763 "firmware_revision": "25.01", 00:09:22.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:22.763 "oacs": { 00:09:22.763 "security": 0, 00:09:22.763 "format": 0, 00:09:22.763 "firmware": 0, 00:09:22.763 "ns_manage": 0 00:09:22.763 }, 00:09:22.763 "multi_ctrlr": true, 00:09:22.763 
"ana_reporting": false 00:09:22.763 }, 00:09:22.763 "vs": { 00:09:22.763 "nvme_version": "1.3" 00:09:22.763 }, 00:09:22.763 "ns_data": { 00:09:22.763 "id": 1, 00:09:22.763 "can_share": true 00:09:22.763 } 00:09:22.763 } 00:09:22.763 ], 00:09:22.763 "mp_policy": "active_passive" 00:09:22.763 } 00:09:22.763 } 00:09:22.763 ] 00:09:22.763 23:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3362913 00:09:22.763 23:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:22.764 23:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:23.022 Running I/O for 10 seconds... 00:09:23.957 Latency(us) 00:09:23.957 [2024-11-09T22:42:50.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.957 Nvme0n1 : 1.00 10670.00 41.68 0.00 0.00 0.00 0.00 0.00 00:09:23.957 [2024-11-09T22:42:50.158Z] =================================================================================================================== 00:09:23.957 [2024-11-09T22:42:50.158Z] Total : 10670.00 41.68 0.00 0.00 0.00 0.00 0.00 00:09:23.957 00:09:24.890 23:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:24.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.890 Nvme0n1 : 2.00 10732.50 41.92 0.00 0.00 0.00 0.00 0.00 00:09:24.890 [2024-11-09T22:42:51.091Z] =================================================================================================================== 00:09:24.890 [2024-11-09T22:42:51.091Z] Total : 10732.50 41.92 0.00 0.00 0.00 0.00 0.00 00:09:24.890 00:09:25.148 true 00:09:25.148 23:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:25.148 23:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:25.406 23:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:25.406 23:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:25.406 23:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3362913 00:09:25.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.972 Nvme0n1 : 3.00 10880.33 42.50 0.00 0.00 0.00 0.00 0.00 00:09:25.972 [2024-11-09T22:42:52.173Z] =================================================================================================================== 00:09:25.972 [2024-11-09T22:42:52.173Z] Total : 10880.33 42.50 0.00 0.00 0.00 0.00 0.00 00:09:25.972 00:09:26.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.906 Nvme0n1 : 4.00 10970.25 42.85 0.00 0.00 0.00 0.00 0.00 00:09:26.906 [2024-11-09T22:42:53.107Z] 
=================================================================================================================== 00:09:26.906 [2024-11-09T22:42:53.107Z] Total : 10970.25 42.85 0.00 0.00 0.00 0.00 0.00 00:09:26.906 00:09:27.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.840 Nvme0n1 : 5.00 10998.60 42.96 0.00 0.00 0.00 0.00 0.00 00:09:27.840 [2024-11-09T22:42:54.041Z] =================================================================================================================== 00:09:27.840 [2024-11-09T22:42:54.041Z] Total : 10998.60 42.96 0.00 0.00 0.00 0.00 0.00 00:09:27.840 00:09:29.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.213 Nvme0n1 : 6.00 11049.33 43.16 0.00 0.00 0.00 0.00 0.00 00:09:29.213 [2024-11-09T22:42:55.414Z] =================================================================================================================== 00:09:29.213 [2024-11-09T22:42:55.414Z] Total : 11049.33 43.16 0.00 0.00 0.00 0.00 0.00 00:09:29.213 00:09:30.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.146 Nvme0n1 : 7.00 11094.71 43.34 0.00 0.00 0.00 0.00 0.00 00:09:30.146 [2024-11-09T22:42:56.347Z] =================================================================================================================== 00:09:30.146 [2024-11-09T22:42:56.347Z] Total : 11094.71 43.34 0.00 0.00 0.00 0.00 0.00 00:09:30.146 00:09:31.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.081 Nvme0n1 : 8.00 11096.88 43.35 0.00 0.00 0.00 0.00 0.00 00:09:31.081 [2024-11-09T22:42:57.282Z] =================================================================================================================== 00:09:31.081 [2024-11-09T22:42:57.282Z] Total : 11096.88 43.35 0.00 0.00 0.00 0.00 0.00 00:09:31.081 00:09:32.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.047 Nvme0n1 : 9.00 11105.67 43.38 0.00 0.00 0.00 0.00 0.00 00:09:32.047 [2024-11-09T22:42:58.248Z] =================================================================================================================== 00:09:32.047 [2024-11-09T22:42:58.248Z] Total : 11105.67 43.38 0.00 0.00 0.00 0.00 0.00 00:09:32.047 00:09:32.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.998 Nvme0n1 : 10.00 11112.70 43.41 0.00 0.00 0.00 0.00 0.00 00:09:32.998 [2024-11-09T22:42:59.199Z] =================================================================================================================== 00:09:32.998 [2024-11-09T22:42:59.199Z] Total : 11112.70 43.41 0.00 0.00 0.00 0.00 0.00 00:09:32.998 00:09:32.998 00:09:32.998 Latency(us) 00:09:32.998 [2024-11-09T22:42:59.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.998 Nvme0n1 : 10.01 11118.13 43.43 0.00 0.00 11506.38 4004.98 22330.79 00:09:32.998 [2024-11-09T22:42:59.199Z] =================================================================================================================== 00:09:32.998 [2024-11-09T22:42:59.199Z] Total : 11118.13 43.43 0.00 0.00 11506.38 4004.98 22330.79 00:09:32.998 { 00:09:32.998 "results": [ 00:09:32.998 { 00:09:32.998 "job": "Nvme0n1", 00:09:32.998 "core_mask": "0x2", 00:09:32.998 "workload": "randwrite", 00:09:32.998 "status": "finished", 00:09:32.998 "queue_depth": 128, 00:09:32.998 "io_size": 4096, 00:09:32.998 
"runtime": 10.00663, 00:09:32.998 "iops": 11118.128680684706, 00:09:32.998 "mibps": 43.43019015892463, 00:09:32.998 "io_failed": 0, 00:09:32.998 "io_timeout": 0, 00:09:32.998 "avg_latency_us": 11506.377738961379, 00:09:32.998 "min_latency_us": 4004.9777777777776, 00:09:32.998 "max_latency_us": 22330.785185185185 00:09:32.998 } 00:09:32.998 ], 00:09:32.998 "core_count": 1 00:09:32.998 } 00:09:32.998 23:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3362643 00:09:32.998 23:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3362643 ']' 00:09:32.998 23:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3362643 00:09:32.998 23:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:09:32.998 23:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:32.998 23:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3362643 00:09:32.998 23:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:32.998 23:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:32.998 23:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3362643' 00:09:32.998 killing process with pid 3362643 00:09:32.998 23:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3362643 00:09:32.998 Received shutdown signal, test time was about 10.000000 seconds 00:09:32.998 00:09:32.999 Latency(us) 00:09:32.999 [2024-11-09T22:42:59.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.999 [2024-11-09T22:42:59.200Z] =================================================================================================================== 00:09:32.999 [2024-11-09T22:42:59.200Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:32.999 23:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3362643 00:09:33.933 23:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.191 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:34.448 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:34.448 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:34.707 23:43:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3359737 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3359737 00:09:34.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3359737 Killed "${NVMF_APP[@]}" "$@" 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3364255 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3364255 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3364255 ']' 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:34.707 23:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:34.965 [2024-11-09 23:43:00.945807] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:09:34.965 [2024-11-09 23:43:00.945950] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.965 [2024-11-09 23:43:01.098673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.223 [2024-11-09 23:43:01.233789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.223 [2024-11-09 23:43:01.233868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.223 [2024-11-09 23:43:01.233894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.223 [2024-11-09 23:43:01.233918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:35.223 [2024-11-09 23:43:01.233938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.223 [2024-11-09 23:43:01.235543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.790 23:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:35.790 23:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:09:35.790 23:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.790 23:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:35.790 23:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.790 23:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.790 23:43:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:36.356 [2024-11-09 23:43:02.296040] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:36.356 [2024-11-09 23:43:02.296286] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:36.356 [2024-11-09 23:43:02.296368] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:36.356 23:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:36.356 23:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 383fe713-d36f-4a06-b177-2b2f254e5acb 00:09:36.356 23:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=383fe713-d36f-4a06-b177-2b2f254e5acb 00:09:36.356 23:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:36.356 23:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:36.356 23:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:36.356 23:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:36.356 23:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:36.614 23:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 383fe713-d36f-4a06-b177-2b2f254e5acb -t 2000 00:09:36.873 [ 00:09:36.873 { 00:09:36.873 "name": "383fe713-d36f-4a06-b177-2b2f254e5acb", 00:09:36.873 "aliases": [ 00:09:36.873 "lvs/lvol" 00:09:36.873 ], 00:09:36.873 "product_name": "Logical Volume", 00:09:36.873 "block_size": 4096, 00:09:36.873 "num_blocks": 38912, 00:09:36.873 "uuid": "383fe713-d36f-4a06-b177-2b2f254e5acb", 00:09:36.873 "assigned_rate_limits": { 00:09:36.873 "rw_ios_per_sec": 0, 00:09:36.873 "rw_mbytes_per_sec": 0, 
00:09:36.873 "r_mbytes_per_sec": 0, 00:09:36.873 "w_mbytes_per_sec": 0 00:09:36.873 }, 00:09:36.873 "claimed": false, 00:09:36.873 "zoned": false, 00:09:36.873 "supported_io_types": { 00:09:36.873 "read": true, 00:09:36.873 "write": true, 00:09:36.873 "unmap": true, 00:09:36.873 "flush": false, 00:09:36.873 "reset": true, 00:09:36.873 "nvme_admin": false, 00:09:36.873 "nvme_io": false, 00:09:36.873 "nvme_io_md": false, 00:09:36.873 "write_zeroes": true, 00:09:36.873 "zcopy": false, 00:09:36.873 "get_zone_info": false, 00:09:36.873 "zone_management": false, 00:09:36.873 "zone_append": false, 00:09:36.873 "compare": false, 00:09:36.873 "compare_and_write": false, 00:09:36.873 "abort": false, 00:09:36.873 "seek_hole": true, 00:09:36.873 "seek_data": true, 00:09:36.873 "copy": false, 00:09:36.873 "nvme_iov_md": false 00:09:36.873 }, 00:09:36.873 "driver_specific": { 00:09:36.873 "lvol": { 00:09:36.873 "lvol_store_uuid": "e455b8f3-35bc-44cc-ba2b-649959879631", 00:09:36.873 "base_bdev": "aio_bdev", 00:09:36.873 "thin_provision": false, 00:09:36.873 "num_allocated_clusters": 38, 00:09:36.873 "snapshot": false, 00:09:36.873 "clone": false, 00:09:36.873 "esnap_clone": false 00:09:36.873 } 00:09:36.873 } 00:09:36.873 } 00:09:36.873 ] 00:09:36.873 23:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:36.873 23:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:36.873 23:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:37.132 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:37.132 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:37.132 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:37.390 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:37.390 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:37.648 [2024-11-09 23:43:03.668763] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:37.648 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:37.648 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:37.648 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:37.648 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.648 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.648 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.648 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.649 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.649 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.649 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.649 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:37.649 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:37.907 request: 00:09:37.907 { 00:09:37.907 "uuid": "e455b8f3-35bc-44cc-ba2b-649959879631", 00:09:37.907 "method": "bdev_lvol_get_lvstores", 00:09:37.907 "req_id": 1 00:09:37.907 } 00:09:37.907 Got JSON-RPC error response 00:09:37.907 response: 00:09:37.907 { 00:09:37.907 "code": -19, 00:09:37.907 "message": "No such device" 00:09:37.907 } 00:09:37.907 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:37.907 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.907 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:37.907 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.907 23:43:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:38.165 aio_bdev 00:09:38.165 23:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 383fe713-d36f-4a06-b177-2b2f254e5acb 00:09:38.165 23:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=383fe713-d36f-4a06-b177-2b2f254e5acb 00:09:38.166 23:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:38.166 23:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:38.166 23:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:38.166 23:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:38.166 23:43:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:38.424 23:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 383fe713-d36f-4a06-b177-2b2f254e5acb -t 2000 00:09:38.682 [ 00:09:38.682 { 00:09:38.682 "name": "383fe713-d36f-4a06-b177-2b2f254e5acb", 00:09:38.682 "aliases": [ 00:09:38.682 "lvs/lvol" 00:09:38.682 ], 00:09:38.682 "product_name": "Logical Volume", 00:09:38.682 "block_size": 4096, 00:09:38.682 "num_blocks": 38912, 00:09:38.682 "uuid": "383fe713-d36f-4a06-b177-2b2f254e5acb", 00:09:38.682 "assigned_rate_limits": { 00:09:38.682 "rw_ios_per_sec": 0, 00:09:38.682 "rw_mbytes_per_sec": 0, 00:09:38.682 "r_mbytes_per_sec": 0, 00:09:38.682 "w_mbytes_per_sec": 0 00:09:38.682 }, 00:09:38.682 "claimed": false, 00:09:38.682 "zoned": false, 00:09:38.682 "supported_io_types": { 00:09:38.682 "read": true, 00:09:38.682 "write": true, 00:09:38.682 "unmap": true, 00:09:38.682 "flush": false, 00:09:38.682 "reset": true, 00:09:38.682 "nvme_admin": false, 00:09:38.682 "nvme_io": false, 00:09:38.682 "nvme_io_md": false, 00:09:38.682 "write_zeroes": true, 00:09:38.682 "zcopy": false, 00:09:38.682 "get_zone_info": false, 00:09:38.682 "zone_management": false, 00:09:38.682 "zone_append": false, 00:09:38.682 "compare": false, 00:09:38.682 "compare_and_write": false, 00:09:38.682 "abort": false, 00:09:38.682 "seek_hole": true, 00:09:38.682 "seek_data": true, 00:09:38.682 "copy": false, 00:09:38.682 "nvme_iov_md": false 00:09:38.682 }, 00:09:38.682 "driver_specific": { 00:09:38.682 "lvol": { 00:09:38.682 "lvol_store_uuid": "e455b8f3-35bc-44cc-ba2b-649959879631", 00:09:38.682 "base_bdev": "aio_bdev", 00:09:38.682 "thin_provision": false, 00:09:38.682 "num_allocated_clusters": 38, 00:09:38.682 "snapshot": false, 00:09:38.682 "clone": false, 00:09:38.682 "esnap_clone": false 00:09:38.682 } 00:09:38.682 } 00:09:38.682 } 00:09:38.682 ] 00:09:38.682 23:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:38.682 23:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:38.682 23:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:38.940 23:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:38.940 23:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:38.940 23:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:39.198 23:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:39.199 23:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 383fe713-d36f-4a06-b177-2b2f254e5acb 00:09:39.765 23:43:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e455b8f3-35bc-44cc-ba2b-649959879631 00:09:39.765 23:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:40.023 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:40.281 00:09:40.281 real 0m22.087s 00:09:40.281 user 0m55.981s 00:09:40.281 sys 0m4.668s 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.281 ************************************ 00:09:40.281 END TEST lvs_grow_dirty 00:09:40.281 ************************************ 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:40.281 nvmf_trace.0 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.281 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.281 rmmod nvme_tcp 00:09:40.282 rmmod nvme_fabrics 00:09:40.282 rmmod nvme_keyring 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:40.282 
23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3364255 ']' 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3364255 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3364255 ']' 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3364255 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3364255 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3364255' 00:09:40.282 killing process with pid 3364255 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3364255 00:09:40.282 23:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3364255 00:09:41.654 23:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.654 23:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.654 23:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.654 23:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:41.654 23:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:41.654 23:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.654 23:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.654 23:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.654 23:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:41.654 23:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.654 23:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.654 23:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.553 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.553 00:09:43.553 real 0m48.974s 00:09:43.553 user 1m23.452s 00:09:43.553 sys 0m8.725s 00:09:43.553 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:43.553 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:43.553 ************************************ 00:09:43.553 END TEST nvmf_lvs_grow 00:09:43.553 ************************************ 00:09:43.553 23:43:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:43.553 23:43:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:43.553 23:43:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:43.553 23:43:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.553 ************************************ 00:09:43.553 START TEST nvmf_bdev_io_wait 00:09:43.553 ************************************ 00:09:43.553 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:43.553 * Looking for test storage... 00:09:43.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.553 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:43.553 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:43.553 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:43.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.811 --rc genhtml_branch_coverage=1 00:09:43.811 --rc genhtml_function_coverage=1 00:09:43.811 --rc genhtml_legend=1 00:09:43.811 --rc geninfo_all_blocks=1 00:09:43.811 --rc geninfo_unexecuted_blocks=1 00:09:43.811 00:09:43.811 ' 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:43.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.811 --rc genhtml_branch_coverage=1 00:09:43.811 --rc genhtml_function_coverage=1 00:09:43.811 --rc genhtml_legend=1 00:09:43.811 --rc geninfo_all_blocks=1 00:09:43.811 --rc geninfo_unexecuted_blocks=1 00:09:43.811 00:09:43.811 ' 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:43.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.811 --rc genhtml_branch_coverage=1 00:09:43.811 --rc genhtml_function_coverage=1 00:09:43.811 --rc genhtml_legend=1 00:09:43.811 --rc geninfo_all_blocks=1 00:09:43.811 --rc geninfo_unexecuted_blocks=1 00:09:43.811 00:09:43.811 ' 00:09:43.811 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:43.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.811 --rc genhtml_branch_coverage=1 00:09:43.811 --rc genhtml_function_coverage=1 00:09:43.811 --rc genhtml_legend=1 00:09:43.811 --rc geninfo_all_blocks=1 00:09:43.812 --rc geninfo_unexecuted_blocks=1 00:09:43.812 00:09:43.812 ' 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.812 23:43:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.812 23:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:45.714 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:45.714 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.714 23:43:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:45.714 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.714 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:45.715 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.715 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.973 23:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:09:45.973 00:09:45.973 --- 10.0.0.2 ping statistics --- 00:09:45.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.973 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:09:45.973 00:09:45.973 --- 10.0.0.1 ping statistics --- 00:09:45.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.973 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3367056 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3367056 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3367056 ']' 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:45.973 23:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.973 [2024-11-09 23:43:12.162956] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
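For reference, the namespace plumbing that nvmf_tcp_init traces a few lines above can be reproduced by hand. A minimal sketch, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in this run (this is not the common.sh implementation itself, just the equivalent commands):

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2 (target side)
INI_IF=cvl_0_1        # stays in the root namespace, gets 10.0.0.1 (initiator side)
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# allow NVMe/TCP traffic to port 4420 arriving on the initiator-side interface
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# sanity checks, mirroring the pings in the log
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

With that in place, the target application is started inside the namespace (as the nvmfappstart trace above shows) and listens on 10.0.0.2:4420 while the initiator connects from the root namespace.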
00:09:45.973 [2024-11-09 23:43:12.163087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.231 [2024-11-09 23:43:12.316622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.488 [2024-11-09 23:43:12.459441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.488 [2024-11-09 23:43:12.459517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.488 [2024-11-09 23:43:12.459555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.488 [2024-11-09 23:43:12.459604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.488 [2024-11-09 23:43:12.459638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.488 [2024-11-09 23:43:12.462454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.488 [2024-11-09 23:43:12.462524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.488 [2024-11-09 23:43:12.462639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.488 [2024-11-09 23:43:12.462642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.054 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:47.311 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.311 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:47.311 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.311 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:47.311 [2024-11-09 23:43:13.432557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.311 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.311 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:47.311 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.311 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:47.570 Malloc0 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:47.570 [2024-11-09 23:43:13.536453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3367247 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3367250 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:47.570 { 00:09:47.570 "params": { 
00:09:47.570 "name": "Nvme$subsystem", 00:09:47.570 "trtype": "$TEST_TRANSPORT", 00:09:47.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.570 "adrfam": "ipv4", 00:09:47.570 "trsvcid": "$NVMF_PORT", 00:09:47.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.570 "hdgst": ${hdgst:-false}, 00:09:47.570 "ddgst": ${ddgst:-false} 00:09:47.570 }, 00:09:47.570 "method": "bdev_nvme_attach_controller" 00:09:47.570 } 00:09:47.570 EOF 00:09:47.570 )") 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3367253 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:47.570 { 00:09:47.570 "params": { 00:09:47.570 "name": "Nvme$subsystem", 00:09:47.570 "trtype": "$TEST_TRANSPORT", 00:09:47.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.570 "adrfam": "ipv4", 00:09:47.570 "trsvcid": "$NVMF_PORT", 00:09:47.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.570 "hdgst": ${hdgst:-false}, 00:09:47.570 "ddgst": ${ddgst:-false} 00:09:47.570 }, 00:09:47.570 "method": "bdev_nvme_attach_controller" 00:09:47.570 } 00:09:47.570 EOF 00:09:47.570 )") 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3367257 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:47.570 { 00:09:47.570 "params": { 00:09:47.570 "name": "Nvme$subsystem", 00:09:47.570 "trtype": "$TEST_TRANSPORT", 00:09:47.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.570 "adrfam": "ipv4", 00:09:47.570 "trsvcid": "$NVMF_PORT", 00:09:47.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.570 "hdgst": ${hdgst:-false}, 
00:09:47.570 "ddgst": ${ddgst:-false} 00:09:47.570 }, 00:09:47.570 "method": "bdev_nvme_attach_controller" 00:09:47.570 } 00:09:47.570 EOF 00:09:47.570 )") 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:47.570 { 00:09:47.570 "params": { 00:09:47.570 "name": "Nvme$subsystem", 00:09:47.570 "trtype": "$TEST_TRANSPORT", 00:09:47.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.570 "adrfam": "ipv4", 00:09:47.570 "trsvcid": "$NVMF_PORT", 00:09:47.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.570 "hdgst": ${hdgst:-false}, 00:09:47.570 "ddgst": ${ddgst:-false} 00:09:47.570 }, 00:09:47.570 "method": "bdev_nvme_attach_controller" 00:09:47.570 } 00:09:47.570 EOF 00:09:47.570 )") 00:09:47.570 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3367247 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:47.571 "params": { 00:09:47.571 "name": "Nvme1", 00:09:47.571 "trtype": "tcp", 00:09:47.571 "traddr": "10.0.0.2", 00:09:47.571 "adrfam": "ipv4", 00:09:47.571 "trsvcid": "4420", 00:09:47.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.571 "hdgst": false, 00:09:47.571 "ddgst": false 00:09:47.571 }, 00:09:47.571 "method": "bdev_nvme_attach_controller" 00:09:47.571 }' 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:47.571 "params": { 00:09:47.571 "name": "Nvme1", 00:09:47.571 "trtype": "tcp", 00:09:47.571 "traddr": "10.0.0.2", 00:09:47.571 "adrfam": "ipv4", 00:09:47.571 "trsvcid": "4420", 00:09:47.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.571 "hdgst": false, 00:09:47.571 "ddgst": false 00:09:47.571 }, 00:09:47.571 "method": "bdev_nvme_attach_controller" 00:09:47.571 }' 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:47.571 "params": { 00:09:47.571 "name": "Nvme1", 00:09:47.571 "trtype": "tcp", 00:09:47.571 "traddr": "10.0.0.2", 00:09:47.571 "adrfam": "ipv4", 00:09:47.571 "trsvcid": "4420", 00:09:47.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.571 "hdgst": false, 00:09:47.571 "ddgst": false 00:09:47.571 }, 00:09:47.571 "method": "bdev_nvme_attach_controller" 00:09:47.571 }' 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:47.571 23:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:47.571 "params": { 00:09:47.571 "name": "Nvme1", 00:09:47.571 "trtype": "tcp", 00:09:47.571 "traddr": "10.0.0.2", 00:09:47.571 "adrfam": "ipv4", 00:09:47.571 "trsvcid": "4420", 00:09:47.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.571 "hdgst": false, 00:09:47.571 "ddgst": false 00:09:47.571 }, 00:09:47.571 "method": "bdev_nvme_attach_controller" 00:09:47.571 }' 00:09:47.571 [2024-11-09 23:43:13.626664] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:09:47.571 [2024-11-09 23:43:13.626666] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:09:47.571 [2024-11-09 23:43:13.626819] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-09 23:43:13.626822] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:47.571 --proc-type=auto ] 00:09:47.571 [2024-11-09 23:43:13.628102] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:09:47.571 [2024-11-09 23:43:13.628102] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
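Each of the four bdevperf instances above (write, read, flush, unmap) is pointed at the same generated configuration via --json /dev/fd/63. Expanded with the values printed by gen_nvmf_target_json in this run, a single write invocation looks roughly like the sketch below; the outer "subsystems"/"bdev" wrapper is an assumption about what the helper emits, and only the inner bdev_nvme_attach_controller parameters are taken verbatim from the log:

# hedged sketch of one bdevperf launch with an inline config (write workload, shm id 1)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)

The other three instances differ only in core mask (-m 0x20/0x40/0x80), shm id (-i 2/3/4) and workload (-w read/flush/unmap), which is why their DPDK EAL startup lines interleave in the output that follows.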
00:09:47.571 [2024-11-09 23:43:13.628233] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-09 23:43:13.628234] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:47.571 --proc-type=auto ] 00:09:47.829 [2024-11-09 23:43:13.876173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.829 [2024-11-09 23:43:14.000595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:47.829 [2024-11-09 23:43:14.019315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.087 [2024-11-09 23:43:14.070039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.087 [2024-11-09 23:43:14.144025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:48.087 [2024-11-09 23:43:14.151503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.087 [2024-11-09 23:43:14.189890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:48.087 [2024-11-09 23:43:14.267951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:48.653 Running I/O for 1 seconds... 00:09:48.653 Running I/O for 1 seconds... 00:09:48.653 Running I/O for 1 seconds... 00:09:48.653 Running I/O for 1 seconds... 00:09:49.589 8033.00 IOPS, 31.38 MiB/s 00:09:49.589 Latency(us) 00:09:49.589 [2024-11-09T22:43:15.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.589 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:49.589 Nvme1n1 : 1.01 8089.07 31.60 0.00 0.00 15742.33 5145.79 23495.87 00:09:49.589 [2024-11-09T22:43:15.790Z] =================================================================================================================== 00:09:49.589 [2024-11-09T22:43:15.790Z] Total : 8089.07 31.60 0.00 0.00 15742.33 5145.79 23495.87 00:09:49.589 7092.00 IOPS, 27.70 MiB/s 00:09:49.589 Latency(us) 00:09:49.589 [2024-11-09T22:43:15.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.589 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:49.589 Nvme1n1 : 1.01 7153.93 27.95 0.00 0.00 17794.29 4344.79 32039.82 00:09:49.589 [2024-11-09T22:43:15.790Z] =================================================================================================================== 00:09:49.589 [2024-11-09T22:43:15.790Z] Total : 7153.93 27.95 0.00 0.00 17794.29 4344.79 32039.82 00:09:49.589 6497.00 IOPS, 25.38 MiB/s 00:09:49.589 Latency(us) 00:09:49.589 [2024-11-09T22:43:15.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.589 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:49.589 Nvme1n1 : 1.01 6550.99 25.59 0.00 0.00 19419.52 10291.58 31068.92 00:09:49.589 [2024-11-09T22:43:15.790Z] =================================================================================================================== 00:09:49.589 [2024-11-09T22:43:15.790Z] Total : 6550.99 25.59 0.00 0.00 19419.52 10291.58 31068.92 00:09:49.847 139568.00 IOPS, 545.19 MiB/s 00:09:49.847 Latency(us) 00:09:49.847 [2024-11-09T22:43:16.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:09:49.847 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:49.847 Nvme1n1 : 1.00 139279.75 544.06 0.00 0.00 914.34 380.78 2014.63 00:09:49.847 [2024-11-09T22:43:16.048Z] =================================================================================================================== 00:09:49.847 [2024-11-09T22:43:16.048Z] Total : 139279.75 544.06 0.00 0.00 914.34 380.78 2014.63 00:09:50.105 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3367250 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3367253 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3367257 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:50.364 rmmod nvme_tcp 00:09:50.364 rmmod nvme_fabrics 00:09:50.364 rmmod nvme_keyring 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3367056 ']' 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3367056 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3367056 ']' 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3367056 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:50.364 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3367056 00:09:50.622 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:50.622 23:43:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:50.622 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3367056' 00:09:50.622 killing process with pid 3367056 00:09:50.622 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3367056 00:09:50.622 23:43:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3367056 00:09:51.557 23:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.557 23:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.557 23:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.557 23:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:51.557 23:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:51.557 23:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:51.557 23:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:51.557 23:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.557 23:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:51.557 23:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.557 23:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.557 23:43:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.464 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:53.464 00:09:53.464 real 0m9.970s 00:09:53.464 user 0m28.310s 00:09:53.464 sys 0m4.492s 00:09:53.464 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:53.464 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.464 ************************************ 00:09:53.464 END TEST nvmf_bdev_io_wait 00:09:53.464 ************************************ 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.723 ************************************ 00:09:53.723 START TEST nvmf_queue_depth 00:09:53.723 ************************************ 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:53.723 * Looking for test storage... 
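Editor's note on the nvmf_bdev_io_wait run above: each bdevperf instance is handed a small JSON config whose "bdev_nvme_attach_controller" entry is exactly the object printed by nvmf/common.sh in the trace. A minimal sketch of the assembled config, assuming the standard SPDK "subsystems"/"config" JSON layout and a purely illustrative file name; only the params values are copied verbatim from the trace:

    cat > /tmp/bdevperf_nvme.json <<'EOF'    # /tmp path is illustrative only
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

Feeding such a file to bdevperf via --json (a hedged guess at how the test script supplies it) is what makes the Nvme1n1 bdev referenced in the latency tables above appear as the benchmark target.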
00:09:53.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:53.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.723 --rc genhtml_branch_coverage=1 00:09:53.723 --rc genhtml_function_coverage=1 00:09:53.723 --rc genhtml_legend=1 00:09:53.723 --rc geninfo_all_blocks=1 00:09:53.723 --rc geninfo_unexecuted_blocks=1 00:09:53.723 00:09:53.723 ' 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:53.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.723 --rc genhtml_branch_coverage=1 00:09:53.723 --rc genhtml_function_coverage=1 00:09:53.723 --rc genhtml_legend=1 00:09:53.723 --rc geninfo_all_blocks=1 00:09:53.723 --rc geninfo_unexecuted_blocks=1 00:09:53.723 00:09:53.723 ' 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:53.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.723 --rc genhtml_branch_coverage=1 00:09:53.723 --rc genhtml_function_coverage=1 00:09:53.723 --rc genhtml_legend=1 00:09:53.723 --rc geninfo_all_blocks=1 00:09:53.723 --rc geninfo_unexecuted_blocks=1 00:09:53.723 00:09:53.723 ' 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:53.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.723 --rc genhtml_branch_coverage=1 00:09:53.723 --rc genhtml_function_coverage=1 00:09:53.723 --rc genhtml_legend=1 00:09:53.723 --rc geninfo_all_blocks=1 00:09:53.723 --rc geninfo_unexecuted_blocks=1 00:09:53.723 00:09:53.723 ' 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.723 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.724 23:43:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.256 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:56.257 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:56.257 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:56.257 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:56.257 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.257 23:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:56.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:09:56.257 00:09:56.257 --- 10.0.0.2 ping statistics --- 00:09:56.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.257 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:56.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:09:56.257 00:09:56.257 --- 10.0.0.1 ping statistics --- 00:09:56.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.257 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3369713 00:09:56.257 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:56.258 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3369713 00:09:56.258 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3369713 ']' 00:09:56.258 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.258 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:56.258 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.258 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:56.258 23:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:56.258 [2024-11-09 23:43:22.225895] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
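Editor's note: the nvmf_tcp_init sequence traced above reduces to the following commands, all copied from the trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this rig. The target side runs inside its own network namespace while the initiator stays in the default namespace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP traffic on port 4420 and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Both pings answer well under a millisecond here (0.207 ms and 0.069 ms), which is the connectivity the rest of the suite relies on before nvmf_tgt is started.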
00:09:56.258 [2024-11-09 23:43:22.226055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.258 [2024-11-09 23:43:22.387312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.516 [2024-11-09 23:43:22.524163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.516 [2024-11-09 23:43:22.524252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.516 [2024-11-09 23:43:22.524278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.516 [2024-11-09 23:43:22.524302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.516 [2024-11-09 23:43:22.524321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.516 [2024-11-09 23:43:22.525948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.081 [2024-11-09 23:43:23.230014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.081 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.339 Malloc0 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.339 23:43:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.339 [2024-11-09 23:43:23.346761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3369871 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3369871 /var/tmp/bdevperf.sock 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3369871 ']' 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:57.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:57.339 23:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.339 [2024-11-09 23:43:23.432920] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
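Editor's note: the target-side bring-up for nvmf_queue_depth traced above is this RPC sequence. rpc_cmd is the harness wrapper around scripts/rpc.py, so presenting it as direct rpc.py calls is an assumption; every method name and flag is copied verbatim from the trace (64 = MALLOC_BDEV_SIZE in MiB, 512 = MALLOC_BLOCK_SIZE in bytes):

    # against the nvmf_tgt launched with '-m 0x2' inside the cvl_0_0_ns_spdk namespace
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the last call the target logs 'NVMe/TCP Target Listening on 10.0.0.2 port 4420', which is the cue for the host-side bdevperf to connect.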
00:09:57.339 [2024-11-09 23:43:23.433092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369871 ] 00:09:57.598 [2024-11-09 23:43:23.572895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.598 [2024-11-09 23:43:23.695707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.531 23:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:58.531 23:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:58.531 23:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:58.531 23:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.531 23:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.531 NVMe0n1 00:09:58.531 23:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.531 23:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:58.789 Running I/O for 10 seconds... 00:10:00.657 6144.00 IOPS, 24.00 MiB/s [2024-11-09T22:43:27.791Z] 6126.00 IOPS, 23.93 MiB/s [2024-11-09T22:43:29.164Z] 6109.00 IOPS, 23.86 MiB/s [2024-11-09T22:43:30.099Z] 6051.00 IOPS, 23.64 MiB/s [2024-11-09T22:43:31.078Z] 6043.20 IOPS, 23.61 MiB/s [2024-11-09T22:43:32.014Z] 6062.33 IOPS, 23.68 MiB/s [2024-11-09T22:43:32.948Z] 6077.14 IOPS, 23.74 MiB/s [2024-11-09T22:43:33.881Z] 6071.00 IOPS, 23.71 MiB/s [2024-11-09T22:43:34.815Z] 6063.44 IOPS, 23.69 MiB/s [2024-11-09T22:43:35.073Z] 6054.80 IOPS, 23.65 MiB/s 00:10:08.872 Latency(us) 00:10:08.872 [2024-11-09T22:43:35.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.872 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:08.872 Verification LBA range: start 0x0 length 0x4000 00:10:08.872 NVMe0n1 : 10.09 6100.95 23.83 0.00 0.00 166886.71 9660.49 104857.60 00:10:08.872 [2024-11-09T22:43:35.073Z] =================================================================================================================== 00:10:08.872 [2024-11-09T22:43:35.073Z] Total : 6100.95 23.83 0.00 0.00 166886.71 9660.49 104857.60 00:10:08.872 { 00:10:08.872 "results": [ 00:10:08.872 { 00:10:08.872 "job": "NVMe0n1", 00:10:08.872 "core_mask": "0x1", 00:10:08.872 "workload": "verify", 00:10:08.872 "status": "finished", 00:10:08.872 "verify_range": { 00:10:08.872 "start": 0, 00:10:08.872 "length": 16384 00:10:08.872 }, 00:10:08.872 "queue_depth": 1024, 00:10:08.872 "io_size": 4096, 00:10:08.872 "runtime": 10.090066, 00:10:08.872 "iops": 6100.951173163783, 00:10:08.872 "mibps": 23.831840520171028, 00:10:08.872 "io_failed": 0, 00:10:08.872 "io_timeout": 0, 00:10:08.872 "avg_latency_us": 166886.70974882875, 00:10:08.872 "min_latency_us": 9660.491851851852, 00:10:08.872 "max_latency_us": 104857.6 00:10:08.872 } 00:10:08.872 ], 00:10:08.872 "core_count": 1 00:10:08.873 } 00:10:08.873 23:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3369871 00:10:08.873 23:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3369871 ']' 00:10:08.873 23:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3369871 00:10:08.873 23:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:10:08.873 23:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:08.873 23:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3369871 00:10:08.873 23:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:08.873 23:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:08.873 23:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3369871' 00:10:08.873 killing process with pid 3369871 00:10:08.873 23:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3369871 00:10:08.873 Received shutdown signal, test time was about 10.000000 seconds 00:10:08.873 00:10:08.873 Latency(us) 00:10:08.873 [2024-11-09T22:43:35.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.873 [2024-11-09T22:43:35.074Z] =================================================================================================================== 00:10:08.873 [2024-11-09T22:43:35.074Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:08.873 23:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3369871 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.806 rmmod nvme_tcp 00:10:09.806 rmmod nvme_fabrics 00:10:09.806 rmmod nvme_keyring 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3369713 ']' 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3369713 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3369713 ']' 00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3369713 
00:10:09.806 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:10:09.807 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:09.807 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3369713 00:10:09.807 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:09.807 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:09.807 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3369713' 00:10:09.807 killing process with pid 3369713 00:10:09.807 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3369713 00:10:09.807 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3369713 00:10:11.183 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.183 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.183 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.183 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:11.183 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:11.183 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.183 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.183 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.183 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.183 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.183 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.183 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.718 00:10:13.718 real 0m19.636s 00:10:13.718 user 0m27.978s 00:10:13.718 sys 0m3.375s 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.718 ************************************ 00:10:13.718 END TEST nvmf_queue_depth 00:10:13.718 ************************************ 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.718 
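Editor's note: on the host side of the nvmf_queue_depth run that finished above, the trace boils down to three steps: start bdevperf idle on its own RPC socket, attach the remote controller, then kick off the workload. The commands are copied from the trace; the repo-relative paths and the explicit backgrounding are a simplification of the script's trap/wait handling:

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

With queue depth 1024 and 4 KiB verify I/O this run sustained roughly 6.1k IOPS (23.8 MiB/s) against the malloc-backed namespace, matching the Latency table reported above.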
************************************ 00:10:13.718 START TEST nvmf_target_multipath 00:10:13.718 ************************************ 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:13.718 * Looking for test storage... 00:10:13.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:13.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.718 --rc genhtml_branch_coverage=1 00:10:13.718 --rc genhtml_function_coverage=1 00:10:13.718 --rc genhtml_legend=1 00:10:13.718 --rc geninfo_all_blocks=1 00:10:13.718 --rc geninfo_unexecuted_blocks=1 00:10:13.718 00:10:13.718 ' 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:13.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.718 --rc genhtml_branch_coverage=1 00:10:13.718 --rc genhtml_function_coverage=1 00:10:13.718 --rc genhtml_legend=1 00:10:13.718 --rc geninfo_all_blocks=1 00:10:13.718 --rc geninfo_unexecuted_blocks=1 00:10:13.718 00:10:13.718 ' 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:13.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.718 --rc genhtml_branch_coverage=1 00:10:13.718 --rc genhtml_function_coverage=1 00:10:13.718 --rc genhtml_legend=1 00:10:13.718 --rc geninfo_all_blocks=1 00:10:13.718 --rc geninfo_unexecuted_blocks=1 00:10:13.718 00:10:13.718 ' 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:13.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.718 --rc genhtml_branch_coverage=1 00:10:13.718 --rc genhtml_function_coverage=1 00:10:13.718 --rc genhtml_legend=1 00:10:13.718 --rc geninfo_all_blocks=1 00:10:13.718 --rc geninfo_unexecuted_blocks=1 00:10:13.718 00:10:13.718 ' 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.718 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.719 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:15.620 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.620 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:15.621 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:15.621 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.621 23:43:41 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:15.621 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:15.621 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:10:15.880 00:10:15.880 --- 10.0.0.2 ping statistics --- 00:10:15.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.880 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:10:15.880 00:10:15.880 --- 10.0.0.1 ping statistics --- 00:10:15.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.880 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:15.880 only one NIC for nvmf test 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
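The trace above shows nvmf_tcp_init splitting the two detected e810 ports into a target side and an initiator side: cvl_0_0 is moved into a fresh network namespace and given 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1, TCP port 4420 is opened, and connectivity is checked with one ping in each direction before nvme-tcp is loaded. A minimal standalone sketch of that sequence (run as root), using the same names and addresses that appear in the log; the full helpers in test/nvmf/common.sh also handle multi-NIC layouts, virtual (veth) fallbacks and address flushing, which are omitted here:

  # target interface goes into its own namespace; the initiator stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and tag the rule so cleanup can find it again
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> root namespace
  modprobe nvme-tcp                                  # initiator kernel driver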
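Because the run found only one usable NIC pair, multipath.sh prints 'only one NIC for nvmf test' and exits early (the check at line 45 apparently fires because NVMF_SECOND_TARGET_IP is empty), so what follows in the trace is just the nvmftestfini cleanup. A rough sketch of that cleanup, matching the commands visible below; the namespace removal is done by the _remove_spdk_ns helper, shown here as an assumed explicit delete:

  set +e                      # module unloads may fail and are retried in a loop
  modprobe -v -r nvme-tcp     # nvme_fabrics / nvme_keyring are dropped with it
  modprobe -v -r nvme-fabrics
  set -e
  # strip every iptables rule that was tagged with the SPDK_NVMF comment during setup
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # remove the target namespace and any address left on the initiator interface
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1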
00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.880 rmmod nvme_tcp 00:10:15.880 rmmod nvme_fabrics 00:10:15.880 rmmod nvme_keyring 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.880 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.781 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.781 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:17.782 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:17.782 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.782 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:17.782 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.782 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:17.782 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.782 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.782 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.041 00:10:18.041 real 0m4.606s 00:10:18.041 user 0m0.943s 00:10:18.041 sys 0m1.530s 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:18.041 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:18.041 ************************************ 00:10:18.041 END TEST nvmf_target_multipath 00:10:18.041 ************************************ 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.041 ************************************ 00:10:18.041 START TEST nvmf_zcopy 00:10:18.041 ************************************ 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:18.041 * Looking for test storage... 
00:10:18.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:18.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.041 --rc genhtml_branch_coverage=1 00:10:18.041 --rc genhtml_function_coverage=1 00:10:18.041 --rc genhtml_legend=1 00:10:18.041 --rc geninfo_all_blocks=1 00:10:18.041 --rc geninfo_unexecuted_blocks=1 00:10:18.041 00:10:18.041 ' 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:18.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.041 --rc genhtml_branch_coverage=1 00:10:18.041 --rc genhtml_function_coverage=1 00:10:18.041 --rc genhtml_legend=1 00:10:18.041 --rc geninfo_all_blocks=1 00:10:18.041 --rc geninfo_unexecuted_blocks=1 00:10:18.041 00:10:18.041 ' 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:18.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.041 --rc genhtml_branch_coverage=1 00:10:18.041 --rc genhtml_function_coverage=1 00:10:18.041 --rc genhtml_legend=1 00:10:18.041 --rc geninfo_all_blocks=1 00:10:18.041 --rc geninfo_unexecuted_blocks=1 00:10:18.041 00:10:18.041 ' 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:18.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.041 --rc genhtml_branch_coverage=1 00:10:18.041 --rc genhtml_function_coverage=1 00:10:18.041 --rc genhtml_legend=1 00:10:18.041 --rc geninfo_all_blocks=1 00:10:18.041 --rc geninfo_unexecuted_blocks=1 00:10:18.041 00:10:18.041 ' 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.041 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:18.042 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.577 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.577 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:20.578 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:20.578 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:20.578 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:20.578 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:20.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:10:20.578 00:10:20.578 --- 10.0.0.2 ping statistics --- 00:10:20.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.578 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:10:20.578 00:10:20.578 --- 10.0.0.1 ping statistics --- 00:10:20.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.578 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.578 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3375353 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3375353 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3375353 ']' 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:20.579 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.579 [2024-11-09 23:43:46.439167] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:10:20.579 [2024-11-09 23:43:46.439300] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.579 [2024-11-09 23:43:46.585650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.579 [2024-11-09 23:43:46.719361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.579 [2024-11-09 23:43:46.719450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.579 [2024-11-09 23:43:46.719476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.579 [2024-11-09 23:43:46.719501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.579 [2024-11-09 23:43:46.719531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.579 [2024-11-09 23:43:46.721218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 [2024-11-09 23:43:47.419675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 [2024-11-09 23:43:47.435946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 malloc0 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:21.513 { 00:10:21.513 "params": { 00:10:21.513 "name": "Nvme$subsystem", 00:10:21.513 "trtype": "$TEST_TRANSPORT", 00:10:21.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:21.513 "adrfam": "ipv4", 00:10:21.513 "trsvcid": "$NVMF_PORT", 00:10:21.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:21.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:21.513 "hdgst": ${hdgst:-false}, 00:10:21.513 "ddgst": ${ddgst:-false} 00:10:21.513 }, 00:10:21.513 "method": "bdev_nvme_attach_controller" 00:10:21.513 } 00:10:21.513 EOF 00:10:21.513 )") 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:21.513 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:21.513 "params": { 00:10:21.513 "name": "Nvme1", 00:10:21.513 "trtype": "tcp", 00:10:21.513 "traddr": "10.0.0.2", 00:10:21.513 "adrfam": "ipv4", 00:10:21.513 "trsvcid": "4420", 00:10:21.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:21.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:21.513 "hdgst": false, 00:10:21.513 "ddgst": false 00:10:21.513 }, 00:10:21.513 "method": "bdev_nvme_attach_controller" 00:10:21.513 }' 00:10:21.513 [2024-11-09 23:43:47.584735] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:10:21.513 [2024-11-09 23:43:47.584882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3375508 ] 00:10:21.771 [2024-11-09 23:43:47.731826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.771 [2024-11-09 23:43:47.869978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.337 Running I/O for 10 seconds... 00:10:24.204 4288.00 IOPS, 33.50 MiB/s [2024-11-09T22:43:51.338Z] 4309.00 IOPS, 33.66 MiB/s [2024-11-09T22:43:52.711Z] 4316.00 IOPS, 33.72 MiB/s [2024-11-09T22:43:53.646Z] 4321.50 IOPS, 33.76 MiB/s [2024-11-09T22:43:54.582Z] 4323.20 IOPS, 33.77 MiB/s [2024-11-09T22:43:55.517Z] 4315.00 IOPS, 33.71 MiB/s [2024-11-09T22:43:56.452Z] 4308.00 IOPS, 33.66 MiB/s [2024-11-09T22:43:57.387Z] 4311.00 IOPS, 33.68 MiB/s [2024-11-09T22:43:58.763Z] 4306.89 IOPS, 33.65 MiB/s [2024-11-09T22:43:58.763Z] 4296.30 IOPS, 33.56 MiB/s 00:10:32.562 Latency(us) 00:10:32.562 [2024-11-09T22:43:58.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:32.562 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:32.562 Verification LBA range: start 0x0 length 0x1000 00:10:32.562 Nvme1n1 : 10.02 4300.55 33.60 0.00 0.00 29685.22 3519.53 36894.34 00:10:32.562 [2024-11-09T22:43:58.763Z] =================================================================================================================== 00:10:32.562 [2024-11-09T22:43:58.763Z] Total : 4300.55 33.60 0.00 0.00 29685.22 3519.53 36894.34 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3376966 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:33.129 { 00:10:33.129 "params": { 00:10:33.129 "name": 
"Nvme$subsystem", 00:10:33.129 "trtype": "$TEST_TRANSPORT", 00:10:33.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.129 "adrfam": "ipv4", 00:10:33.129 "trsvcid": "$NVMF_PORT", 00:10:33.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.129 "hdgst": ${hdgst:-false}, 00:10:33.129 "ddgst": ${ddgst:-false} 00:10:33.129 }, 00:10:33.129 "method": "bdev_nvme_attach_controller" 00:10:33.129 } 00:10:33.129 EOF 00:10:33.129 )") 00:10:33.129 [2024-11-09 23:43:59.283520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.129 [2024-11-09 23:43:59.283605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:33.129 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:33.129 "params": { 00:10:33.129 "name": "Nvme1", 00:10:33.129 "trtype": "tcp", 00:10:33.129 "traddr": "10.0.0.2", 00:10:33.129 "adrfam": "ipv4", 00:10:33.129 "trsvcid": "4420", 00:10:33.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.129 "hdgst": false, 00:10:33.129 "ddgst": false 00:10:33.129 }, 00:10:33.129 "method": "bdev_nvme_attach_controller" 00:10:33.129 }' 00:10:33.129 [2024-11-09 23:43:59.291463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.129 [2024-11-09 23:43:59.291495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.129 [2024-11-09 23:43:59.299460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.129 [2024-11-09 23:43:59.299487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.129 [2024-11-09 23:43:59.307497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.129 [2024-11-09 23:43:59.307524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.129 [2024-11-09 23:43:59.315533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.129 [2024-11-09 23:43:59.315560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.129 [2024-11-09 23:43:59.323525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.129 [2024-11-09 23:43:59.323552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.331615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.331646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.339615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.339659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.347631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.347661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.355657] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.355687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.363669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.363699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.364322] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:10:33.388 [2024-11-09 23:43:59.364452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376966 ] 00:10:33.388 [2024-11-09 23:43:59.371710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.371740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.379727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.379756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.387734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.387762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.395767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.395796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.403803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.403836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.411833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.411866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.419850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.419883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.427864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.427896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.435915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.435948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.443953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.443988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.451947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.451980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 
[2024-11-09 23:43:59.459995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.460029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.468009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.468043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.476021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.476054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.484057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.484091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.492074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.492108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.500111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.500143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.506240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.388 [2024-11-09 23:43:59.508153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.508186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.516146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.516179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.524228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.524271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.532273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.532322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.540206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.540239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.548246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.548280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.556252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.556285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.564301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.564334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.572313] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.388 [2024-11-09 23:43:59.572345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.388 [2024-11-09 23:43:59.580323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.389 [2024-11-09 23:43:59.580356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.389 [2024-11-09 23:43:59.588373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.389 [2024-11-09 23:43:59.588406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.646 [2024-11-09 23:43:59.596388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.646 [2024-11-09 23:43:59.596422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.646 [2024-11-09 23:43:59.604414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.646 [2024-11-09 23:43:59.604456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.612432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.612465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.620441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.620473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.628509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.628542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.636501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.636534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.644509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.644542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.647841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.647 [2024-11-09 23:43:59.652556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.652599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.660584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.660626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.668662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.668706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.676713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.676758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.684656] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.684684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.692686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.692714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.700714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.700741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.708710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.708739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.716738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.716767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.724758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.724802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.732754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.732782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.740873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.740923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.748858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.748923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.756925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.756990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.764942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.764994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.772925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.772964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.780938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.780972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.788955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.788988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.796993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.797027] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.805011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.805043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.813012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.813045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.821059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.821092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.829081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.829113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.837079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.837111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.647 [2024-11-09 23:43:59.845124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.647 [2024-11-09 23:43:59.845158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.853147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.853182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.861157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.861190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.869191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.869224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.877199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.877233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.885241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.885275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.893285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.893321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.901338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.901396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.909388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.909439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.917348] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.917387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.925338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.925371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.933375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.933408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.941387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.941430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.949425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.949459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.957444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.957478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.965453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.965485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.973485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.973518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.981506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.981538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.989540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.989573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:43:59.997550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:43:59.997583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.005562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.005606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.013673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.013710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.021710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.021747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.029687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.029718] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.037732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.037766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.045722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.045757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.053766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.053806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.061748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.061781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.069778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.069811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.077814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.077846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.085819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.085851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.093889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.093922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.905 [2024-11-09 23:44:00.101941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.905 [2024-11-09 23:44:00.101977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.109959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.109995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 Running I/O for 5 seconds... 
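
A condensed sketch of the RPC sequence behind this run, reconstructed from the rpc_cmd calls traced earlier; scripts/rpc.py is assumed here as the out-of-band equivalent of the harness's rpc_cmd wrapper, and the final add_ns call only illustrates why the surrounding "Requested NSID 1 already in use" / "Unable to add namespace" pairs appear: NSID 1 is still attached while bdevperf I/O is in flight, so each repeated add attempt is rejected by design.

  # target-side configuration, mirroring target/zcopy.sh@22..30 in the trace
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0               # 32 MB malloc bdev, 4096-byte blocks
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # re-adding NSID 1 while it is still attached fails; this is the source of
  # the repeated *ERROR* pairs in this part of the log
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true

  # host-side load generator for this 5-second pass (a JSON file with the
  # bdev_nvme_attach_controller config printed above stands in for the
  # /dev/fd/63 process substitution used by the harness)
  ./build/examples/bdevperf --json ./nvme_attach.json -t 5 -q 128 -w randrw -M 50 -o 8192
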
00:10:34.164 [2024-11-09 23:44:00.117978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.118016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.135351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.135388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.150254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.150294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.165179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.165214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.180141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.180177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.195739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.195775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.210691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.210728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.225056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.225096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.239743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.239780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.254865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.254902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.269146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.269186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.283881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.283925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.298755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.298791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.313318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.313370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.327553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 
[2024-11-09 23:44:00.327602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.342125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.342175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.164 [2024-11-09 23:44:00.356833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.164 [2024-11-09 23:44:00.356870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.371701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.371738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.385947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.385983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.400576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.400639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.414736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.414772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.428643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.428679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.442582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.442646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.457038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.457090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.471614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.471665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.485761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.485802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.500145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.500181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.514026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.514060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.528286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.528322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.542925] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.542966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.557805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.557842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.572863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.572913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.588328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.588369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.603752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.603789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.423 [2024-11-09 23:44:00.617523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.423 [2024-11-09 23:44:00.617573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.632167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.632218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.646791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.646828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.660899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.660936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.675378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.675414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.689873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.689923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.704220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.704257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.718842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.718892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.732923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.732974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.747207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.747259] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.761875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.761926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.776315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.776351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.790649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.790685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.805304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.805340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.819383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.819419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.833531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.833567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.847705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.847741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.861952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.862003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.682 [2024-11-09 23:44:00.875412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.682 [2024-11-09 23:44:00.875447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.940 [2024-11-09 23:44:00.889916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.940 [2024-11-09 23:44:00.889953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.940 [2024-11-09 23:44:00.904857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:00.904893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:00.919954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:00.919990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:00.935430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:00.935470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:00.951279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:00.951318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:00.966482] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:00.966537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:00.981960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:00.981999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:00.997173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:00.997213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:01.012257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:01.012297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:01.026684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:01.026720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:01.041360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:01.041401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:01.056080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:01.056120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:01.070943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:01.070984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:01.085893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:01.085932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:01.101124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:01.101164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 [2024-11-09 23:44:01.114984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:01.115024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.941 8626.00 IOPS, 67.39 MiB/s [2024-11-09T22:44:01.142Z] [2024-11-09 23:44:01.129983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.941 [2024-11-09 23:44:01.130023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.144954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.144994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.160300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.160339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.173381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:35.200 [2024-11-09 23:44:01.173420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.188198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.188238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.204139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.204190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.219472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.219512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.234714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.234750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.249948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.249988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.264309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.264349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.278751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.278787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.293542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.293581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.308287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.308328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.323169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.323210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.338264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.338304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.354167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.354208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.369296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.369336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.384723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.384780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.200 [2024-11-09 23:44:01.399847] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.200 [2024-11-09 23:44:01.399894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.415141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.415181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.430326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.430366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.445458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.445499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.461214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.461255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.476553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.476603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.491541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.491600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.507030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.507070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.521832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.521871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.537398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.537438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.551876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.551917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.566687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.566722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.581460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.581499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.596378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.596417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.611064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.611115] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.625987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.626029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.640744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.640779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.459 [2024-11-09 23:44:01.655399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.459 [2024-11-09 23:44:01.655439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.670515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.670564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.685792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.685828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.700972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.701023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.715529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.715564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.730320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.730359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.744872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.744912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.759955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.759994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.774777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.774812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.789115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.789154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.803900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.803940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.818897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.818948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.833926] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.833966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.848201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.848240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.863295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.863335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.877639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.877675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.892875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.892916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.718 [2024-11-09 23:44:01.908057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.718 [2024-11-09 23:44:01.908097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:01.920493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:01.920533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:01.933867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:01.933917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:01.947761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:01.947805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:01.962368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:01.962407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:01.977882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:01.977923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:01.990534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:01.990574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:02.005105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:02.005145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:02.020584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:02.020635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:02.035515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:02.035555] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:02.050349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:02.050388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:02.064616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:02.064667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:02.079328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:02.079377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:02.093783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:02.093820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:02.108696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:02.108746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:02.123301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:02.123340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 8579.00 IOPS, 67.02 MiB/s [2024-11-09T22:44:02.178Z] [2024-11-09 23:44:02.137838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:02.137892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:02.152063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:02.152113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.977 [2024-11-09 23:44:02.166362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.977 [2024-11-09 23:44:02.166398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.180508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.180547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.194866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.194916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.209933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.209974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.225340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.225381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.240296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.240336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 
23:44:02.256269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.256309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.271349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.271388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.286514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.286555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.301011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.301051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.315744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.315779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.330216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.330255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.345252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.345305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.360416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.360456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.374963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.375003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.390270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.390310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.405065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.405104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.420180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.420219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.236 [2024-11-09 23:44:02.435446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.236 [2024-11-09 23:44:02.435486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.450452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.450508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.464934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.464975] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.479770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.479807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.494046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.494082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.508897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.508937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.525280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.525320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.540925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.540969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.556004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.556045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.570654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.570691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.585719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.585754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.600451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.600491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.615618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.615672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.630928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.630968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.647117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.647157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.662546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.662605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.677609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.677648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.496 [2024-11-09 23:44:02.692296] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.496 [2024-11-09 23:44:02.692336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.707564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.707614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.722976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.723012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.737445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.737484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.752209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.752248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.766919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.766972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.782203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.782242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.796887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.796926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.811691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.811726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.825923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.825963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.840808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.840844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.855685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.855720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.870459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.870514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.885104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.885143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.899605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.899658] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.914210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.914250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.929196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.929237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.755 [2024-11-09 23:44:02.944167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.755 [2024-11-09 23:44:02.944207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:02.959524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:02.959564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:02.974592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:02.974645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:02.989853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:02.989892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.004373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.004412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.018929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.018968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.033201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.033251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.047740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.047775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.062648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.062692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.076864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.076915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.091152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.091188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.105159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.105195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.118967] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.119007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 8569.33 IOPS, 66.95 MiB/s [2024-11-09T22:44:03.215Z] [2024-11-09 23:44:03.133635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.133670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.149005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.149044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.164214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.164253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.179381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.179420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.194980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.195020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.014 [2024-11-09 23:44:03.210354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.014 [2024-11-09 23:44:03.210394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.222818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.222854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.236112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.236152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.250667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.250705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.265785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.265820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.281383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.281423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.295915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.295967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.310261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.310300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.325380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:37.273 [2024-11-09 23:44:03.325419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.340744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.340788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.355479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.355519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.370044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.370083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.384907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.384946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.400037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.400071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.414519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.414558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.429641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.429677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.445117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.445156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.273 [2024-11-09 23:44:03.460233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.273 [2024-11-09 23:44:03.460273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.475581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.475645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.490565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.490637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.504998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.505034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.520250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.520289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.535272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.535325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.550230] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.550270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.562430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.562469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.577146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.577186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.591515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.591554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.607001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.607042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.619719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.619765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.634347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.634387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.649415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.649455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.664312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.664352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.679844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.679884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.695027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.695067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.710308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.710348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.532 [2024-11-09 23:44:03.724987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.532 [2024-11-09 23:44:03.725026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.739872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.739911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.755296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.755335] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.767901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.767942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.783059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.783099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.797968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.798007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.812392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.812431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.827323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.827363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.841436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.841486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.856502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.856542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.871988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.872028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.887355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.887394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.902351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.902390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.917459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.917498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.932496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.932536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.948287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.948327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.963678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.963717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.791 [2024-11-09 23:44:03.978817] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.791 [2024-11-09 23:44:03.978856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:03.993862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:03.993902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.008881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.008920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.024048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.024087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.038972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.039011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.053929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.053968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.068597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.068636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.083777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.083817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.098833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.098872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.113802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.113842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.128585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.128633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 8543.25 IOPS, 66.74 MiB/s [2024-11-09T22:44:04.251Z] [2024-11-09 23:44:04.143712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.143751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.158539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.158578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.173249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.173288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.188080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:38.050 [2024-11-09 23:44:04.188120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.203329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.203368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.215625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.215664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.229216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.229254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.050 [2024-11-09 23:44:04.244488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.050 [2024-11-09 23:44:04.244541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.260364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.260410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.277308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.277349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.293097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.293138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.308635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.308675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.323932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.323971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.338267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.338306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.353186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.353226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.368373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.368412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.384074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.384114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.399250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.399289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.414359] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.414398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.429872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.429913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.442382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.442421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.457094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.457134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.472303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.472342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.486881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.486920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.321 [2024-11-09 23:44:04.502096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.321 [2024-11-09 23:44:04.502136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.518709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.518750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.535368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.535408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.551339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.551379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.567551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.567601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.583201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.583242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.598212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.598251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.613253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.613290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.626994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.627028] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.641780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.641816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.655968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.656004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.670411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.670461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.685061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.685115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.699795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.699845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.713895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.713946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.728800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.728836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.743171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.743229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.757244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.757298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.770906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.770943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.609 [2024-11-09 23:44:04.785472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.609 [2024-11-09 23:44:04.785509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.800124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.800163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.814206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.814243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.828283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.828320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.842517] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.842554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.857117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.857152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.870983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.871019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.885201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.885253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.899556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.899601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.913400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.913436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.927597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.927663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.943109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.943162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.955990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.956031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.971287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.971327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:04.986059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:04.986098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:05.001537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:05.001576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:05.014117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:05.014167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:05.027769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:05.027805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.879 [2024-11-09 23:44:05.042435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.879 [2024-11-09 23:44:05.042474] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:38.879 [2024-11-09 23:44:05.057143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:38.879 [2024-11-09 23:44:05.057183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:38.879 [2024-11-09 23:44:05.072327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:38.879 [2024-11-09 23:44:05.072363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:39.138 [2024-11-09 23:44:05.087758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:39.138 [2024-11-09 23:44:05.087794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:39.138 [2024-11-09 23:44:05.101133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:39.138 [2024-11-09 23:44:05.101172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:39.138 [2024-11-09 23:44:05.115624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:39.138 [2024-11-09 23:44:05.115664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:39.138 [2024-11-09 23:44:05.130707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:39.138 [2024-11-09 23:44:05.130744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:39.138 8535.60 IOPS, 66.68 MiB/s [2024-11-09T22:44:05.339Z] [2024-11-09 23:44:05.144144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:39.138 [2024-11-09 23:44:05.144183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:39.138
00:10:39.138 Latency(us)
00:10:39.138 [2024-11-09T22:44:05.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:39.138 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:39.138 Nvme1n1 : 5.02 8534.64 66.68 0.00 0.00 14971.92 4102.07 25437.68
00:10:39.138 [2024-11-09T22:44:05.339Z] ===================================================================================================================
00:10:39.138 [2024-11-09T22:44:05.339Z] Total : 8534.64 66.68 0.00 0.00 14971.92 4102.07 25437.68
00:10:39.138 [2024-11-09 23:44:05.149847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:39.138 [2024-11-09 23:44:05.149898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:39.138 [2024-11-09 23:44:05.157852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:39.138 [2024-11-09 23:44:05.157906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:39.138 [2024-11-09 23:44:05.165864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:39.138 [2024-11-09 23:44:05.165928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:39.138 [2024-11-09 23:44:05.173867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:39.138 [2024-11-09 23:44:05.173913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:39.138 [2024-11-09 23:44:05.181932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:39.138 [2024-11-09 23:44:05.181968]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.189934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.189967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.198100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.198158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.206096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.206154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.214074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.214126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.222058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.222091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.230071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.230105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.238068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.238101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.246147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.246181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.254129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.254162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.262210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.262244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.270194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.270227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.278206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.278239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.286227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.286256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.294378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.294435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.302371] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.302429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.310380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.138 [2024-11-09 23:44:05.310424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.138 [2024-11-09 23:44:05.318335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.139 [2024-11-09 23:44:05.318367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.139 [2024-11-09 23:44:05.326373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.139 [2024-11-09 23:44:05.326406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.139 [2024-11-09 23:44:05.334407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.139 [2024-11-09 23:44:05.334441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.397 [2024-11-09 23:44:05.342434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.397 [2024-11-09 23:44:05.342471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.350440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.350475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.358464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.358496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.366473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.366505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.374507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.374539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.382511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.382543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.390553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.390594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.398580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.398635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.406621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.406666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.414646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.414674] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.422664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.422692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.430692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.430719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.438716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.438744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.446712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.446743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.454757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.454792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.462895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.462955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.470797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.470834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.478817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.478846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.486837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.486867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.494846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.494876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.502904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.502937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.514947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.514980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.523099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.523161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.531113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.531171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.539120] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.539177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.547124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.547177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.555065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.555099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.563062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.563095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.571114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.571147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.579119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.579152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.587150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.587183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.398 [2024-11-09 23:44:05.595180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.398 [2024-11-09 23:44:05.595214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.603182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.603215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.611225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.611258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.619244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.619276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.627264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.627296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.635287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.635320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.643287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.643320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.651329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.651362] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.659356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.659391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.667359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.667391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.675402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.675435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.683420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.683464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.691425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.691458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.699510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.699550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.707555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.707649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.715527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.715562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.723534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.723567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.731562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.731607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.739581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.739637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.747615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.747672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.755644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.755673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.763669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.763697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.771671] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.771698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.779711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.779740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.787716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.787745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.795741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.657 [2024-11-09 23:44:05.795769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.657 [2024-11-09 23:44:05.803754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.658 [2024-11-09 23:44:05.803789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.658 [2024-11-09 23:44:05.811784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.658 [2024-11-09 23:44:05.811812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.658 [2024-11-09 23:44:05.819780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.658 [2024-11-09 23:44:05.819808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.658 [2024-11-09 23:44:05.827991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.658 [2024-11-09 23:44:05.828049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.658 [2024-11-09 23:44:05.835842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.658 [2024-11-09 23:44:05.835889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.658 [2024-11-09 23:44:05.843885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.658 [2024-11-09 23:44:05.843918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.658 [2024-11-09 23:44:05.851910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.658 [2024-11-09 23:44:05.851954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.859922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.859968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.867945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.867974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.875983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.876016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.883996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.884028] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.892037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.892070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.900030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.900063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.908066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.908100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.916083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.916116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.924224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.924285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.932135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.932168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.940158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.940191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.948170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.948203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.956206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.956247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.964208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.964241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.972249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.972282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.980275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.980308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.988281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.988314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:05.996334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:05.996366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:06.004346] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:06.004387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:06.012353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:06.012386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:06.020419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:06.020455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:06.028402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:06.028434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:06.036436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:06.036469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:06.044452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:06.044484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 [2024-11-09 23:44:06.052461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.917 [2024-11-09 23:44:06.052494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3376966) - No such process 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3376966 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.917 delay0 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.917 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 
50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:40.176 [2024-11-09 23:44:06.196216] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:46.736 Initializing NVMe Controllers 00:10:46.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:46.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:46.736 Initialization complete. Launching workers. 00:10:46.736 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 56 00:10:46.736 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 343, failed to submit 33 00:10:46.736 success 146, unsuccessful 197, failed 0 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.736 rmmod nvme_tcp 00:10:46.736 rmmod nvme_fabrics 00:10:46.736 rmmod nvme_keyring 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3375353 ']' 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3375353 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3375353 ']' 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3375353 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3375353 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3375353' 00:10:46.736 killing process with pid 3375353 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3375353 00:10:46.736 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3375353 00:10:47.670 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- 
# '[' '' == iso ']' 00:10:47.670 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.670 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.670 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:47.670 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.670 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:47.671 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.671 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.671 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.671 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.671 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.671 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.205 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:50.205 00:10:50.205 real 0m31.837s 00:10:50.205 user 0m47.795s 00:10:50.205 sys 0m8.013s 00:10:50.205 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:50.205 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:50.205 ************************************ 00:10:50.205 END TEST nvmf_zcopy 00:10:50.205 ************************************ 00:10:50.205 23:44:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:50.205 23:44:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:50.205 23:44:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:50.205 23:44:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:50.205 ************************************ 00:10:50.205 START TEST nvmf_nmic 00:10:50.205 ************************************ 00:10:50.205 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:50.205 * Looking for test storage... 
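For anyone replaying the zcopy steps by hand, the delay0/abort sequence traced above (zcopy.sh lines 52-56) comes down to roughly the following standalone commands. This is a hedged sketch: it assumes the harness helper rpc_cmd is a thin wrapper over scripts/rpc.py and that paths are relative to the spdk checkout used in this job.

  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # drop the existing NSID 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                                 # wrap malloc0 in a high-latency delay bdev
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # re-expose the slow bdev as NSID 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'               # drive randrw I/O and abort it in flight, as summarized above
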
00:10:50.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.205 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:50.205 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:10:50.205 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:50.205 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:50.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.206 --rc genhtml_branch_coverage=1 00:10:50.206 --rc genhtml_function_coverage=1 00:10:50.206 --rc genhtml_legend=1 00:10:50.206 --rc geninfo_all_blocks=1 00:10:50.206 --rc geninfo_unexecuted_blocks=1 00:10:50.206 00:10:50.206 ' 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:50.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.206 --rc genhtml_branch_coverage=1 00:10:50.206 --rc genhtml_function_coverage=1 00:10:50.206 --rc genhtml_legend=1 00:10:50.206 --rc geninfo_all_blocks=1 00:10:50.206 --rc geninfo_unexecuted_blocks=1 00:10:50.206 00:10:50.206 ' 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:50.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.206 --rc genhtml_branch_coverage=1 00:10:50.206 --rc genhtml_function_coverage=1 00:10:50.206 --rc genhtml_legend=1 00:10:50.206 --rc geninfo_all_blocks=1 00:10:50.206 --rc geninfo_unexecuted_blocks=1 00:10:50.206 00:10:50.206 ' 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:50.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.206 --rc genhtml_branch_coverage=1 00:10:50.206 --rc genhtml_function_coverage=1 00:10:50.206 --rc genhtml_legend=1 00:10:50.206 --rc geninfo_all_blocks=1 00:10:50.206 --rc geninfo_unexecuted_blocks=1 00:10:50.206 00:10:50.206 ' 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
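The lcov gate traced just above is a plain component-wise version compare. A minimal sketch of the same idea (not the exact scripts/common.sh implementation) looks like this:

  # return 0 (true) when $1 sorts before $2, comparing numeric components split on . - :
  version_lt() {
      local IFS='.-:' i n v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "1.15 sorts before 2"
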
00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:50.206 
23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.206 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.207 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.207 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.207 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.207 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.207 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.207 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
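The device scan below keys off PCI vendor/device IDs (0x8086:0x159b is the Intel E810 family). Outside the harness, the same ports and their kernel netdev names can be listed with standard tools; a hedged sketch, using the bus addresses reported further down:

  lspci -D -d 8086:159b                         # lists the 0x159b ports, e.g. 0000:0a:00.0 and 0000:0a:00.1
  ls /sys/bus/pci/devices/0000:0a:00.0/net/     # netdev behind the first port (cvl_0_0 in this run)
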
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:52.111 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:52.111 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.111 23:44:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:52.111 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:52.111 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:52.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:52.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:10:52.111 00:10:52.111 --- 10.0.0.2 ping statistics --- 00:10:52.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.111 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:10:52.111 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:52.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:52.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:10:52.370 00:10:52.370 --- 10.0.0.1 ping statistics --- 00:10:52.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.370 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3380515 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3380515 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3380515 ']' 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:52.370 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.370 [2024-11-09 23:44:18.433783] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
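The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) reduces to a short namespace setup. The sketch below condenses the commands shown in the trace; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply what this run detected, not fixed values.

    ip netns add cvl_0_0_ns_spdk                          # target lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator reachability

The two pings are the harness's reachability check before it starts nvmf_tgt inside the namespace.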
00:10:52.370 [2024-11-09 23:44:18.433959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.629 [2024-11-09 23:44:18.597415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.629 [2024-11-09 23:44:18.742189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.629 [2024-11-09 23:44:18.742274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.629 [2024-11-09 23:44:18.742301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.629 [2024-11-09 23:44:18.742326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.629 [2024-11-09 23:44:18.742346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.629 [2024-11-09 23:44:18.745266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.629 [2024-11-09 23:44:18.745325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.629 [2024-11-09 23:44:18.745377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.629 [2024-11-09 23:44:18.745384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.564 [2024-11-09 23:44:19.478562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.564 Malloc0 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.564 [2024-11-09 23:44:19.601117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:53.564 test case1: single bdev can't be used in multiple subsystems 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.564 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.565 [2024-11-09 23:44:19.624823] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:53.565 [2024-11-09 23:44:19.624866] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:53.565 [2024-11-09 23:44:19.624909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.565 request: 00:10:53.565 { 00:10:53.565 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:53.565 "namespace": { 00:10:53.565 "bdev_name": "Malloc0", 00:10:53.565 "no_auto_visible": false 
00:10:53.565 }, 00:10:53.565 "method": "nvmf_subsystem_add_ns", 00:10:53.565 "req_id": 1 00:10:53.565 } 00:10:53.565 Got JSON-RPC error response 00:10:53.565 response: 00:10:53.565 { 00:10:53.565 "code": -32602, 00:10:53.565 "message": "Invalid parameters" 00:10:53.565 } 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:53.565 Adding namespace failed - expected result. 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:53.565 test case2: host connect to nvmf target in multiple paths 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.565 [2024-11-09 23:44:19.632971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.565 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.132 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:55.066 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.066 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:10:55.066 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.066 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:55.066 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:10:56.965 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:56.965 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:56.965 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:56.965 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:56.965 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.965 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:10:56.965 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:56.965 [global] 00:10:56.965 thread=1 00:10:56.965 invalidate=1 00:10:56.965 rw=write 00:10:56.965 time_based=1 00:10:56.965 runtime=1 00:10:56.965 ioengine=libaio 00:10:56.965 direct=1 00:10:56.965 bs=4096 00:10:56.965 iodepth=1 00:10:56.965 norandommap=0 00:10:56.965 numjobs=1 00:10:56.965 00:10:56.965 verify_dump=1 00:10:56.965 verify_backlog=512 00:10:56.965 verify_state_save=0 00:10:56.965 do_verify=1 00:10:56.965 verify=crc32c-intel 00:10:56.965 [job0] 00:10:56.965 filename=/dev/nvme0n1 00:10:56.965 Could not set queue depth (nvme0n1) 00:10:56.965 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.965 fio-3.35 00:10:56.965 Starting 1 thread 00:10:58.338 00:10:58.338 job0: (groupid=0, jobs=1): err= 0: pid=3381273: Sat Nov 9 23:44:24 2024 00:10:58.338 read: IOPS=1532, BW=6130KiB/s (6277kB/s)(6148KiB/1003msec) 00:10:58.338 slat (nsec): min=4540, max=55683, avg=14333.58, stdev=5757.52 00:10:58.338 clat (usec): min=225, max=41424, avg=334.79, stdev=1474.96 00:10:58.338 lat (usec): min=231, max=41438, avg=349.12, stdev=1475.05 00:10:58.338 clat percentiles (usec): 00:10:58.338 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 255], 00:10:58.338 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:10:58.338 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 334], 00:10:58.338 | 99.00th=[ 490], 99.50th=[ 529], 99.90th=[41157], 99.95th=[41681], 00:10:58.338 | 99.99th=[41681] 00:10:58.338 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:10:58.338 slat (usec): min=5, max=28673, avg=28.56, stdev=633.33 00:10:58.338 clat (usec): min=152, max=440, avg=191.79, stdev=27.29 00:10:58.338 lat (usec): min=159, max=29006, avg=220.35, stdev=637.29 00:10:58.339 clat percentiles (usec): 00:10:58.339 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:10:58.339 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 192], 00:10:58.339 | 70.00th=[ 202], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 233], 00:10:58.339 | 99.00th=[ 285], 99.50th=[ 310], 99.90th=[ 392], 99.95th=[ 429], 00:10:58.339 | 99.99th=[ 441] 00:10:58.339 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=2 00:10:58.339 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:58.339 lat (usec) : 250=61.53%, 500=38.16%, 750=0.25% 00:10:58.339 lat (msec) : 50=0.06% 00:10:58.339 cpu : usr=3.09%, sys=6.49%, ctx=3588, majf=0, minf=1 00:10:58.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.339 issued rwts: total=1537,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.339 00:10:58.339 Run status group 0 (all jobs): 00:10:58.339 READ: bw=6130KiB/s (6277kB/s), 6130KiB/s-6130KiB/s (6277kB/s-6277kB/s), io=6148KiB (6296kB), run=1003-1003msec 00:10:58.339 WRITE: bw=8167KiB/s (8364kB/s), 8167KiB/s-8167KiB/s (8364kB/s-8364kB/s), io=8192KiB (8389kB), run=1003-1003msec 00:10:58.339 00:10:58.339 Disk stats (read/write): 00:10:58.339 nvme0n1: ios=1588/1735, merge=0/0, ticks=857/309, in_queue=1166, util=98.60% 00:10:58.339 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:58.598 rmmod nvme_tcp 00:10:58.598 rmmod nvme_fabrics 00:10:58.598 rmmod nvme_keyring 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3380515 ']' 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3380515 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3380515 ']' 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3380515 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3380515 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3380515' 00:10:58.598 killing process with pid 3380515 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3380515 00:10:58.598 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@976 -- # wait 3380515 00:10:59.973 23:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:59.973 23:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:59.973 23:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:59.973 23:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:59.973 23:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:59.973 23:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:59.973 23:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:59.973 23:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.973 23:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:59.973 23:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.973 23:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.973 23:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.874 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:01.874 00:11:01.874 real 0m12.132s 00:11:01.874 user 0m28.806s 00:11:01.874 sys 0m2.866s 00:11:01.874 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:01.874 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.874 ************************************ 00:11:01.874 END TEST nvmf_nmic 00:11:01.874 ************************************ 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.134 ************************************ 00:11:02.134 START TEST nvmf_fio_target 00:11:02.134 ************************************ 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:02.134 * Looking for test storage... 
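With the target up, nmic.sh drives everything through JSON-RPC. Condensed from the rpc_cmd trace above (a sketch of the sequence, not the script verbatim; rpc.py stands for scripts/rpc.py against the default /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # test case1: the same bdev cannot back a namespace in a second subsystem
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: Malloc0 already claimed by cnode1
    # test case2: one subsystem, two listeners, host connects over both paths
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

The duplicate add_ns is the only call expected to return an error; nmic.sh records its status and treats success there as a test failure, then runs the fio-wrapper write pass and disconnects both paths as shown above.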
00:11:02.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:02.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.134 --rc genhtml_branch_coverage=1 00:11:02.134 --rc genhtml_function_coverage=1 00:11:02.134 --rc genhtml_legend=1 00:11:02.134 --rc geninfo_all_blocks=1 00:11:02.134 --rc geninfo_unexecuted_blocks=1 00:11:02.134 00:11:02.134 ' 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:02.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.134 --rc genhtml_branch_coverage=1 00:11:02.134 --rc genhtml_function_coverage=1 00:11:02.134 --rc genhtml_legend=1 00:11:02.134 --rc geninfo_all_blocks=1 00:11:02.134 --rc geninfo_unexecuted_blocks=1 00:11:02.134 00:11:02.134 ' 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:02.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.134 --rc genhtml_branch_coverage=1 00:11:02.134 --rc genhtml_function_coverage=1 00:11:02.134 --rc genhtml_legend=1 00:11:02.134 --rc geninfo_all_blocks=1 00:11:02.134 --rc geninfo_unexecuted_blocks=1 00:11:02.134 00:11:02.134 ' 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:02.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.134 --rc genhtml_branch_coverage=1 00:11:02.134 --rc genhtml_function_coverage=1 00:11:02.134 --rc genhtml_legend=1 00:11:02.134 --rc geninfo_all_blocks=1 00:11:02.134 --rc geninfo_unexecuted_blocks=1 00:11:02.134 00:11:02.134 ' 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.134 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.135 23:44:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:02.135 23:44:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.663 23:44:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.663 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:04.664 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:04.664 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.664 23:44:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:04.664 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:04.664 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.664 23:44:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:04.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:11:04.664 00:11:04.664 --- 10.0.0.2 ping statistics --- 00:11:04.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.664 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:11:04.664 00:11:04.664 --- 10.0.0.1 ping statistics --- 00:11:04.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.664 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3383500 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3383500 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3383500 ']' 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:04.664 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.664 [2024-11-09 23:44:30.579125] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
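nvmfappstart then launches the target inside the same namespace that owns cvl_0_0, exactly as in the nmic run. Condensed from the trace (the binary path and core mask are the ones this job used; the real helper captures the PID via a wrapper rather than a bare background job):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten blocks until the app answers on /var/tmp/spdk.sock,
    # hence the 'Waiting for process to start up and listen on UNIX domain socket' line above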
00:11:04.664 [2024-11-09 23:44:30.579283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.664 [2024-11-09 23:44:30.741155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.923 [2024-11-09 23:44:30.886074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.923 [2024-11-09 23:44:30.886137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.923 [2024-11-09 23:44:30.886163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.923 [2024-11-09 23:44:30.886187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.923 [2024-11-09 23:44:30.886207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.923 [2024-11-09 23:44:30.889074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.923 [2024-11-09 23:44:30.889134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.923 [2024-11-09 23:44:30.889165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.923 [2024-11-09 23:44:30.889157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.489 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:05.489 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:11:05.489 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:05.489 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:05.489 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.489 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.489 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:05.748 [2024-11-09 23:44:31.820755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.748 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.314 23:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:06.314 23:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.572 23:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:06.572 23:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.830 23:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:06.830 23:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.089 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:07.089 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:07.347 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.913 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:07.913 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.170 23:44:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:08.170 23:44:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.428 23:44:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:08.428 23:44:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:08.686 23:44:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:08.944 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:08.944 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.202 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:09.202 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:09.460 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.717 [2024-11-09 23:44:35.851520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.717 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:09.975 23:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:10.233 23:44:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.167 23:44:37 
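The bring-up traced above (nvmf/common.sh plus target/fio.sh up to the host connect) can be condensed into the bash sketch below. This is not part of the captured output; it only restates commands the trace already shows. The interface names cvl_0_0/cvl_0_1, the rpc.py and nvmf_tgt paths, the subsystem NQN and the host NQN/UUID are copied from the log; the malloc-creation loop, the backgrounding of nvmf_tgt and the sleep standing in for the script's waitforlisten helper are simplifications added here.

#!/usr/bin/env bash
set -e
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

# Put the target-side port into its own network namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

# Start the target inside the namespace; the real script waits on the RPC
# socket (waitforlisten), a plain sleep stands in for that here.
ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
sleep 2

# TCP transport, seven 64 MiB malloc bdevs (512 B blocks), one raid0 and one
# concat volume, then the subsystem with four namespaces and a TCP listener.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
for _ in $(seq 0 6); do "$RPC" bdev_malloc_create 64 512; done   # Malloc0..Malloc6
"$RPC" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
"$RPC" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

# Connect from the initiator side; the four namespaces show up as nvme0n1..n4.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
             --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420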
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:11.167 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:11:11.167 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.167 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:11:11.167 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:11:11.167 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:11:13.063 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:13.063 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:13.063 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.063 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:11:13.063 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.063 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:11:13.063 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:13.063 [global] 00:11:13.063 thread=1 00:11:13.063 invalidate=1 00:11:13.063 rw=write 00:11:13.063 time_based=1 00:11:13.063 runtime=1 00:11:13.063 ioengine=libaio 00:11:13.063 direct=1 00:11:13.063 bs=4096 00:11:13.063 iodepth=1 00:11:13.063 norandommap=0 00:11:13.063 numjobs=1 00:11:13.063 00:11:13.063 verify_dump=1 00:11:13.063 verify_backlog=512 00:11:13.063 verify_state_save=0 00:11:13.063 do_verify=1 00:11:13.063 verify=crc32c-intel 00:11:13.063 [job0] 00:11:13.063 filename=/dev/nvme0n1 00:11:13.063 [job1] 00:11:13.063 filename=/dev/nvme0n2 00:11:13.063 [job2] 00:11:13.063 filename=/dev/nvme0n3 00:11:13.063 [job3] 00:11:13.063 filename=/dev/nvme0n4 00:11:13.063 Could not set queue depth (nvme0n1) 00:11:13.063 Could not set queue depth (nvme0n2) 00:11:13.063 Could not set queue depth (nvme0n3) 00:11:13.063 Could not set queue depth (nvme0n4) 00:11:13.321 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.321 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.321 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.321 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.321 fio-3.35 00:11:13.321 Starting 4 threads 00:11:14.705 00:11:14.705 job0: (groupid=0, jobs=1): err= 0: pid=3384708: Sat Nov 9 23:44:40 2024 00:11:14.705 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:14.705 slat (nsec): min=5552, max=44653, avg=10184.31, stdev=5564.50 00:11:14.705 clat (usec): min=227, max=41826, avg=676.78, stdev=3812.28 00:11:14.705 lat (usec): min=233, max=41836, avg=686.96, stdev=3813.68 00:11:14.705 clat percentiles (usec): 00:11:14.705 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 269], 
00:11:14.705 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:11:14.705 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 404], 95.00th=[ 490], 00:11:14.705 | 99.00th=[ 685], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:14.705 | 99.99th=[41681] 00:11:14.705 write: IOPS=1074, BW=4300KiB/s (4403kB/s)(4304KiB/1001msec); 0 zone resets 00:11:14.705 slat (nsec): min=7662, max=80141, avg=18025.55, stdev=8541.63 00:11:14.705 clat (usec): min=186, max=676, avg=249.99, stdev=35.99 00:11:14.705 lat (usec): min=195, max=685, avg=268.01, stdev=38.87 00:11:14.705 clat percentiles (usec): 00:11:14.705 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 225], 00:11:14.705 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:11:14.705 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 293], 00:11:14.705 | 99.00th=[ 396], 99.50th=[ 424], 99.90th=[ 603], 99.95th=[ 676], 00:11:14.705 | 99.99th=[ 676] 00:11:14.705 bw ( KiB/s): min= 4096, max= 4096, per=26.26%, avg=4096.00, stdev= 0.00, samples=1 00:11:14.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:14.705 lat (usec) : 250=29.76%, 500=68.14%, 750=1.62% 00:11:14.705 lat (msec) : 4=0.05%, 50=0.43% 00:11:14.705 cpu : usr=2.80%, sys=3.30%, ctx=2101, majf=0, minf=1 00:11:14.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.705 issued rwts: total=1024,1076,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.705 job1: (groupid=0, jobs=1): err= 0: pid=3384709: Sat Nov 9 23:44:40 2024 00:11:14.705 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:14.705 slat (nsec): min=5489, max=37384, avg=10008.42, stdev=5251.04 00:11:14.705 clat (usec): min=218, max=41061, avg=629.23, stdev=3796.09 00:11:14.705 lat (usec): min=224, max=41074, avg=639.24, stdev=3796.83 00:11:14.705 clat percentiles (usec): 00:11:14.705 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:11:14.705 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:11:14.705 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 318], 00:11:14.705 | 99.00th=[ 644], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:14.705 | 99.99th=[41157] 00:11:14.706 write: IOPS=1406, BW=5626KiB/s (5761kB/s)(5632KiB/1001msec); 0 zone resets 00:11:14.706 slat (nsec): min=7157, max=55706, avg=15740.88, stdev=7941.58 00:11:14.706 clat (usec): min=166, max=982, avg=223.04, stdev=34.41 00:11:14.706 lat (usec): min=174, max=995, avg=238.78, stdev=37.42 00:11:14.706 clat percentiles (usec): 00:11:14.706 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 204], 00:11:14.706 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:11:14.706 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 265], 00:11:14.706 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 635], 99.95th=[ 979], 00:11:14.706 | 99.99th=[ 979] 00:11:14.706 bw ( KiB/s): min= 4096, max= 4096, per=26.26%, avg=4096.00, stdev= 0.00, samples=1 00:11:14.706 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:14.706 lat (usec) : 250=60.53%, 500=38.73%, 750=0.29%, 1000=0.04% 00:11:14.706 lat (msec) : 2=0.04%, 50=0.37% 00:11:14.706 cpu : usr=2.50%, sys=4.20%, ctx=2433, majf=0, minf=1 00:11:14.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.706 issued rwts: total=1024,1408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.706 job2: (groupid=0, jobs=1): err= 0: pid=3384710: Sat Nov 9 23:44:40 2024 00:11:14.706 read: IOPS=75, BW=303KiB/s (310kB/s)(312KiB/1031msec) 00:11:14.706 slat (nsec): min=6797, max=36763, avg=15965.88, stdev=7618.45 00:11:14.706 clat (usec): min=281, max=41320, avg=10692.20, stdev=17706.32 00:11:14.706 lat (usec): min=289, max=41339, avg=10708.17, stdev=17707.60 00:11:14.706 clat percentiles (usec): 00:11:14.706 | 1.00th=[ 281], 5.00th=[ 285], 10.00th=[ 302], 20.00th=[ 314], 00:11:14.706 | 30.00th=[ 322], 40.00th=[ 351], 50.00th=[ 424], 60.00th=[ 445], 00:11:14.706 | 70.00th=[ 490], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:11:14.706 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:14.706 | 99.99th=[41157] 00:11:14.706 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:11:14.706 slat (nsec): min=7198, max=75832, avg=27536.50, stdev=11983.00 00:11:14.706 clat (usec): min=212, max=586, avg=346.87, stdev=79.09 00:11:14.706 lat (usec): min=230, max=603, avg=374.40, stdev=77.79 00:11:14.706 clat percentiles (usec): 00:11:14.706 | 1.00th=[ 223], 5.00th=[ 245], 10.00th=[ 260], 20.00th=[ 273], 00:11:14.706 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 326], 60.00th=[ 375], 00:11:14.706 | 70.00th=[ 400], 80.00th=[ 424], 90.00th=[ 457], 95.00th=[ 486], 00:11:14.706 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[ 586], 99.95th=[ 586], 00:11:14.706 | 99.99th=[ 586] 00:11:14.706 bw ( KiB/s): min= 4096, max= 4096, per=26.26%, avg=4096.00, stdev= 0.00, samples=1 00:11:14.706 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:14.706 lat (usec) : 250=5.25%, 500=88.64%, 750=2.71% 00:11:14.706 lat (msec) : 50=3.39% 00:11:14.706 cpu : usr=0.68%, sys=1.65%, ctx=592, majf=0, minf=1 00:11:14.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.706 issued rwts: total=78,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.706 job3: (groupid=0, jobs=1): err= 0: pid=3384711: Sat Nov 9 23:44:40 2024 00:11:14.706 read: IOPS=755, BW=3021KiB/s (3093kB/s)(3024KiB/1001msec) 00:11:14.706 slat (nsec): min=6390, max=50487, avg=12246.14, stdev=6427.64 00:11:14.706 clat (usec): min=258, max=41234, avg=873.29, stdev=4406.46 00:11:14.706 lat (usec): min=265, max=41283, avg=885.54, stdev=4408.15 00:11:14.706 clat percentiles (usec): 00:11:14.706 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[ 314], 00:11:14.706 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 388], 00:11:14.706 | 70.00th=[ 441], 80.00th=[ 482], 90.00th=[ 523], 95.00th=[ 603], 00:11:14.706 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:14.706 | 99.99th=[41157] 00:11:14.706 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:14.706 slat (nsec): min=8051, max=79983, avg=21704.54, stdev=11439.24 00:11:14.706 clat (usec): min=192, max=558, avg=292.75, stdev=76.49 00:11:14.706 lat (usec): 
min=203, max=603, avg=314.45, stdev=81.00 00:11:14.706 clat percentiles (usec): 00:11:14.706 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:11:14.706 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 265], 60.00th=[ 289], 00:11:14.706 | 70.00th=[ 318], 80.00th=[ 351], 90.00th=[ 424], 95.00th=[ 449], 00:11:14.706 | 99.00th=[ 502], 99.50th=[ 529], 99.90th=[ 537], 99.95th=[ 562], 00:11:14.706 | 99.99th=[ 562] 00:11:14.706 bw ( KiB/s): min= 4096, max= 4096, per=26.26%, avg=4096.00, stdev= 0.00, samples=1 00:11:14.706 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:14.706 lat (usec) : 250=23.31%, 500=69.78%, 750=6.40% 00:11:14.706 lat (msec) : 50=0.51% 00:11:14.706 cpu : usr=2.70%, sys=3.60%, ctx=1781, majf=0, minf=1 00:11:14.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.706 issued rwts: total=756,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.706 00:11:14.706 Run status group 0 (all jobs): 00:11:14.706 READ: bw=10.9MiB/s (11.4MB/s), 303KiB/s-4092KiB/s (310kB/s-4190kB/s), io=11.3MiB (11.8MB), run=1001-1031msec 00:11:14.706 WRITE: bw=15.2MiB/s (16.0MB/s), 1986KiB/s-5626KiB/s (2034kB/s-5761kB/s), io=15.7MiB (16.5MB), run=1001-1031msec 00:11:14.706 00:11:14.706 Disk stats (read/write): 00:11:14.706 nvme0n1: ios=602/1024, merge=0/0, ticks=1422/236, in_queue=1658, util=85.87% 00:11:14.706 nvme0n2: ios=828/1024, merge=0/0, ticks=645/217, in_queue=862, util=91.16% 00:11:14.706 nvme0n3: ios=130/512, merge=0/0, ticks=1285/169, in_queue=1454, util=93.53% 00:11:14.706 nvme0n4: ios=576/873, merge=0/0, ticks=691/246, in_queue=937, util=95.80% 00:11:14.706 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:14.706 [global] 00:11:14.706 thread=1 00:11:14.706 invalidate=1 00:11:14.706 rw=randwrite 00:11:14.706 time_based=1 00:11:14.706 runtime=1 00:11:14.706 ioengine=libaio 00:11:14.706 direct=1 00:11:14.706 bs=4096 00:11:14.706 iodepth=1 00:11:14.706 norandommap=0 00:11:14.706 numjobs=1 00:11:14.706 00:11:14.706 verify_dump=1 00:11:14.706 verify_backlog=512 00:11:14.706 verify_state_save=0 00:11:14.706 do_verify=1 00:11:14.706 verify=crc32c-intel 00:11:14.706 [job0] 00:11:14.706 filename=/dev/nvme0n1 00:11:14.706 [job1] 00:11:14.706 filename=/dev/nvme0n2 00:11:14.706 [job2] 00:11:14.706 filename=/dev/nvme0n3 00:11:14.706 [job3] 00:11:14.706 filename=/dev/nvme0n4 00:11:14.706 Could not set queue depth (nvme0n1) 00:11:14.706 Could not set queue depth (nvme0n2) 00:11:14.706 Could not set queue depth (nvme0n3) 00:11:14.706 Could not set queue depth (nvme0n4) 00:11:14.706 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.706 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.706 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.706 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:14.706 fio-3.35 00:11:14.706 Starting 4 threads 00:11:16.104 00:11:16.104 job0: (groupid=0, jobs=1): err= 0: pid=3384941: 
Sat Nov 9 23:44:42 2024 00:11:16.104 read: IOPS=515, BW=2063KiB/s (2112kB/s)(2108KiB/1022msec) 00:11:16.104 slat (nsec): min=5548, max=33261, avg=9892.43, stdev=4647.53 00:11:16.104 clat (usec): min=229, max=41985, avg=1442.42, stdev=6804.26 00:11:16.104 lat (usec): min=236, max=41997, avg=1452.31, stdev=6804.85 00:11:16.104 clat percentiles (usec): 00:11:16.104 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 258], 00:11:16.104 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:11:16.104 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 347], 00:11:16.104 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:16.104 | 99.99th=[42206] 00:11:16.104 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:11:16.104 slat (nsec): min=6751, max=58645, avg=13255.31, stdev=6983.59 00:11:16.104 clat (usec): min=175, max=1616, avg=231.13, stdev=52.86 00:11:16.104 lat (usec): min=184, max=1625, avg=244.39, stdev=53.54 00:11:16.104 clat percentiles (usec): 00:11:16.105 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 210], 00:11:16.105 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 235], 00:11:16.105 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:11:16.105 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 848], 99.95th=[ 1614], 00:11:16.105 | 99.99th=[ 1614] 00:11:16.105 bw ( KiB/s): min= 8192, max= 8192, per=58.70%, avg=8192.00, stdev= 0.00, samples=1 00:11:16.105 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:16.105 lat (usec) : 250=58.09%, 500=40.81%, 1000=0.06% 00:11:16.105 lat (msec) : 2=0.06%, 50=0.97% 00:11:16.105 cpu : usr=1.37%, sys=2.45%, ctx=1551, majf=0, minf=1 00:11:16.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.105 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.105 job1: (groupid=0, jobs=1): err= 0: pid=3384942: Sat Nov 9 23:44:42 2024 00:11:16.105 read: IOPS=240, BW=963KiB/s (987kB/s)(976KiB/1013msec) 00:11:16.105 slat (nsec): min=5547, max=32884, avg=11789.31, stdev=5485.80 00:11:16.105 clat (usec): min=238, max=41336, avg=3622.24, stdev=11180.22 00:11:16.105 lat (usec): min=245, max=41348, avg=3634.02, stdev=11180.65 00:11:16.105 clat percentiles (usec): 00:11:16.105 | 1.00th=[ 247], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 269], 00:11:16.105 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:11:16.105 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 355], 95.00th=[41157], 00:11:16.105 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:16.105 | 99.99th=[41157] 00:11:16.105 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:11:16.105 slat (nsec): min=5504, max=49650, avg=12960.41, stdev=5844.77 00:11:16.105 clat (usec): min=165, max=508, avg=227.38, stdev=64.86 00:11:16.105 lat (usec): min=172, max=517, avg=240.34, stdev=64.87 00:11:16.105 clat percentiles (usec): 00:11:16.105 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 188], 00:11:16.105 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:11:16.105 | 70.00th=[ 215], 80.00th=[ 253], 90.00th=[ 347], 95.00th=[ 392], 00:11:16.105 | 99.00th=[ 416], 99.50th=[ 420], 99.90th=[ 510], 99.95th=[ 510], 
00:11:16.105 | 99.99th=[ 510] 00:11:16.105 bw ( KiB/s): min= 4096, max= 4096, per=29.35%, avg=4096.00, stdev= 0.00, samples=1 00:11:16.105 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:16.105 lat (usec) : 250=54.63%, 500=42.33%, 750=0.13%, 1000=0.13% 00:11:16.105 lat (msec) : 2=0.13%, 50=2.65% 00:11:16.105 cpu : usr=0.20%, sys=1.28%, ctx=756, majf=0, minf=1 00:11:16.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.105 issued rwts: total=244,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.105 job2: (groupid=0, jobs=1): err= 0: pid=3384943: Sat Nov 9 23:44:42 2024 00:11:16.105 read: IOPS=120, BW=480KiB/s (492kB/s)(500KiB/1041msec) 00:11:16.105 slat (nsec): min=4460, max=34585, avg=8586.18, stdev=6459.66 00:11:16.105 clat (usec): min=209, max=41044, avg=7114.21, stdev=15268.84 00:11:16.105 lat (usec): min=217, max=41062, avg=7122.80, stdev=15271.54 00:11:16.105 clat percentiles (usec): 00:11:16.105 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 239], 00:11:16.105 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 269], 00:11:16.105 | 70.00th=[ 330], 80.00th=[ 416], 90.00th=[41157], 95.00th=[41157], 00:11:16.105 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:16.105 | 99.99th=[41157] 00:11:16.105 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:11:16.105 slat (nsec): min=7392, max=68925, avg=16289.88, stdev=10765.67 00:11:16.105 clat (usec): min=194, max=485, avg=271.67, stdev=67.15 00:11:16.105 lat (usec): min=202, max=554, avg=287.96, stdev=70.69 00:11:16.105 clat percentiles (usec): 00:11:16.105 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:11:16.105 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 260], 00:11:16.105 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 400], 95.00th=[ 412], 00:11:16.105 | 99.00th=[ 437], 99.50th=[ 449], 99.90th=[ 486], 99.95th=[ 486], 00:11:16.105 | 99.99th=[ 486] 00:11:16.105 bw ( KiB/s): min= 4096, max= 4096, per=29.35%, avg=4096.00, stdev= 0.00, samples=1 00:11:16.105 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:16.105 lat (usec) : 250=52.90%, 500=43.64% 00:11:16.105 lat (msec) : 2=0.16%, 50=3.30% 00:11:16.105 cpu : usr=0.67%, sys=1.06%, ctx=637, majf=0, minf=1 00:11:16.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.105 issued rwts: total=125,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.105 job3: (groupid=0, jobs=1): err= 0: pid=3384944: Sat Nov 9 23:44:42 2024 00:11:16.105 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:16.105 slat (nsec): min=4116, max=49224, avg=10144.30, stdev=6979.55 00:11:16.105 clat (usec): min=208, max=41296, avg=352.42, stdev=1596.33 00:11:16.105 lat (usec): min=216, max=41304, avg=362.56, stdev=1596.52 00:11:16.105 clat percentiles (usec): 00:11:16.105 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:11:16.105 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:11:16.105 | 
70.00th=[ 302], 80.00th=[ 351], 90.00th=[ 383], 95.00th=[ 424], 00:11:16.105 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[40633], 99.95th=[41157], 00:11:16.105 | 99.99th=[41157] 00:11:16.105 write: IOPS=1582, BW=6330KiB/s (6482kB/s)(6336KiB/1001msec); 0 zone resets 00:11:16.105 slat (nsec): min=5485, max=59011, avg=13385.81, stdev=7820.52 00:11:16.105 clat (usec): min=165, max=529, avg=259.77, stdev=78.60 00:11:16.105 lat (usec): min=172, max=572, avg=273.15, stdev=79.49 00:11:16.105 clat percentiles (usec): 00:11:16.105 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:11:16.105 | 30.00th=[ 194], 40.00th=[ 206], 50.00th=[ 241], 60.00th=[ 251], 00:11:16.105 | 70.00th=[ 302], 80.00th=[ 359], 90.00th=[ 379], 95.00th=[ 400], 00:11:16.105 | 99.00th=[ 433], 99.50th=[ 437], 99.90th=[ 515], 99.95th=[ 529], 00:11:16.105 | 99.99th=[ 529] 00:11:16.105 bw ( KiB/s): min= 6312, max= 6312, per=45.23%, avg=6312.00, stdev= 0.00, samples=1 00:11:16.105 iops : min= 1578, max= 1578, avg=1578.00, stdev= 0.00, samples=1 00:11:16.105 lat (usec) : 250=55.38%, 500=43.43%, 750=1.06%, 1000=0.03% 00:11:16.105 lat (msec) : 50=0.10% 00:11:16.105 cpu : usr=2.40%, sys=3.60%, ctx=3120, majf=0, minf=1 00:11:16.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.105 issued rwts: total=1536,1584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.105 00:11:16.105 Run status group 0 (all jobs): 00:11:16.105 READ: bw=9345KiB/s (9569kB/s), 480KiB/s-6138KiB/s (492kB/s-6285kB/s), io=9728KiB (9961kB), run=1001-1041msec 00:11:16.105 WRITE: bw=13.6MiB/s (14.3MB/s), 1967KiB/s-6330KiB/s (2015kB/s-6482kB/s), io=14.2MiB (14.9MB), run=1001-1041msec 00:11:16.105 00:11:16.105 Disk stats (read/write): 00:11:16.105 nvme0n1: ios=572/1024, merge=0/0, ticks=583/229, in_queue=812, util=86.77% 00:11:16.105 nvme0n2: ios=281/512, merge=0/0, ticks=763/109, in_queue=872, util=87.80% 00:11:16.105 nvme0n3: ios=120/512, merge=0/0, ticks=685/132, in_queue=817, util=88.83% 00:11:16.105 nvme0n4: ios=1024/1520, merge=0/0, ticks=408/390, in_queue=798, util=89.68% 00:11:16.105 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:16.105 [global] 00:11:16.105 thread=1 00:11:16.105 invalidate=1 00:11:16.105 rw=write 00:11:16.105 time_based=1 00:11:16.105 runtime=1 00:11:16.105 ioengine=libaio 00:11:16.105 direct=1 00:11:16.105 bs=4096 00:11:16.105 iodepth=128 00:11:16.105 norandommap=0 00:11:16.105 numjobs=1 00:11:16.105 00:11:16.105 verify_dump=1 00:11:16.105 verify_backlog=512 00:11:16.105 verify_state_save=0 00:11:16.105 do_verify=1 00:11:16.105 verify=crc32c-intel 00:11:16.105 [job0] 00:11:16.105 filename=/dev/nvme0n1 00:11:16.105 [job1] 00:11:16.105 filename=/dev/nvme0n2 00:11:16.105 [job2] 00:11:16.105 filename=/dev/nvme0n3 00:11:16.105 [job3] 00:11:16.105 filename=/dev/nvme0n4 00:11:16.105 Could not set queue depth (nvme0n1) 00:11:16.105 Could not set queue depth (nvme0n2) 00:11:16.105 Could not set queue depth (nvme0n3) 00:11:16.105 Could not set queue depth (nvme0n4) 00:11:16.412 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.412 job1: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.412 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.412 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.412 fio-3.35 00:11:16.412 Starting 4 threads 00:11:17.350 00:11:17.350 job0: (groupid=0, jobs=1): err= 0: pid=3385288: Sat Nov 9 23:44:43 2024 00:11:17.350 read: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1003msec) 00:11:17.350 slat (usec): min=2, max=11447, avg=120.16, stdev=598.08 00:11:17.350 clat (usec): min=529, max=35912, avg=15633.06, stdev=4546.88 00:11:17.350 lat (usec): min=3608, max=35916, avg=15753.22, stdev=4549.55 00:11:17.350 clat percentiles (usec): 00:11:17.350 | 1.00th=[ 7111], 5.00th=[11731], 10.00th=[12387], 20.00th=[13698], 00:11:17.350 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:11:17.350 | 70.00th=[15008], 80.00th=[16057], 90.00th=[22676], 95.00th=[25822], 00:11:17.350 | 99.00th=[31327], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:11:17.350 | 99.99th=[35914] 00:11:17.350 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:11:17.350 slat (usec): min=3, max=6861, avg=117.49, stdev=623.05 00:11:17.350 clat (usec): min=10006, max=32958, avg=15512.68, stdev=4324.42 00:11:17.350 lat (usec): min=10596, max=32985, avg=15630.17, stdev=4318.14 00:11:17.350 clat percentiles (usec): 00:11:17.350 | 1.00th=[10683], 5.00th=[11469], 10.00th=[12125], 20.00th=[13304], 00:11:17.350 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14222], 60.00th=[14615], 00:11:17.350 | 70.00th=[14877], 80.00th=[15664], 90.00th=[23725], 95.00th=[26346], 00:11:17.350 | 99.00th=[30802], 99.50th=[31065], 99.90th=[32900], 99.95th=[32900], 00:11:17.350 | 99.99th=[32900] 00:11:17.350 bw ( KiB/s): min=16384, max=16384, per=31.44%, avg=16384.00, stdev= 0.00, samples=2 00:11:17.350 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:11:17.350 lat (usec) : 750=0.01% 00:11:17.350 lat (msec) : 4=0.38%, 10=0.41%, 20=86.65%, 50=12.55% 00:11:17.350 cpu : usr=5.39%, sys=5.79%, ctx=368, majf=0, minf=1 00:11:17.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:17.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.350 issued rwts: total=4048,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.350 job1: (groupid=0, jobs=1): err= 0: pid=3385289: Sat Nov 9 23:44:43 2024 00:11:17.350 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:11:17.350 slat (nsec): min=1883, max=11837k, avg=121415.48, stdev=803398.92 00:11:17.350 clat (usec): min=4793, max=28913, avg=14965.92, stdev=3362.90 00:11:17.350 lat (usec): min=4796, max=28920, avg=15087.34, stdev=3424.95 00:11:17.350 clat percentiles (usec): 00:11:17.350 | 1.00th=[ 7046], 5.00th=[10552], 10.00th=[12256], 20.00th=[12780], 00:11:17.350 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13829], 60.00th=[15664], 00:11:17.350 | 70.00th=[16581], 80.00th=[17171], 90.00th=[19006], 95.00th=[21103], 00:11:17.350 | 99.00th=[25297], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:11:17.350 | 99.99th=[28967] 00:11:17.350 write: IOPS=4381, BW=17.1MiB/s (17.9MB/s)(17.3MiB/1013msec); 0 zone resets 00:11:17.350 slat (usec): min=2, max=11787, avg=100.41, stdev=558.88 
00:11:17.350 clat (usec): min=803, max=43896, avg=15114.72, stdev=5411.82 00:11:17.350 lat (usec): min=1822, max=43903, avg=15215.13, stdev=5459.78 00:11:17.350 clat percentiles (usec): 00:11:17.350 | 1.00th=[ 4621], 5.00th=[ 7963], 10.00th=[10421], 20.00th=[12518], 00:11:17.350 | 30.00th=[13173], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:11:17.350 | 70.00th=[15270], 80.00th=[17171], 90.00th=[19792], 95.00th=[26346], 00:11:17.350 | 99.00th=[35914], 99.50th=[38536], 99.90th=[43779], 99.95th=[43779], 00:11:17.350 | 99.99th=[43779] 00:11:17.351 bw ( KiB/s): min=16737, max=17776, per=33.11%, avg=17256.50, stdev=734.68, samples=2 00:11:17.351 iops : min= 4184, max= 4444, avg=4314.00, stdev=183.85, samples=2 00:11:17.351 lat (usec) : 1000=0.01% 00:11:17.351 lat (msec) : 4=0.35%, 10=5.77%, 20=86.29%, 50=7.58% 00:11:17.351 cpu : usr=2.57%, sys=4.45%, ctx=460, majf=0, minf=2 00:11:17.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:17.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.351 issued rwts: total=4096,4438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.351 job2: (groupid=0, jobs=1): err= 0: pid=3385291: Sat Nov 9 23:44:43 2024 00:11:17.351 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:11:17.351 slat (usec): min=3, max=25247, avg=261.53, stdev=1559.24 00:11:17.351 clat (usec): min=5535, max=86287, avg=32412.95, stdev=13611.41 00:11:17.351 lat (usec): min=5541, max=86306, avg=32674.48, stdev=13743.69 00:11:17.351 clat percentiles (usec): 00:11:17.351 | 1.00th=[ 9765], 5.00th=[20579], 10.00th=[21890], 20.00th=[23987], 00:11:17.351 | 30.00th=[24511], 40.00th=[25035], 50.00th=[26084], 60.00th=[28967], 00:11:17.351 | 70.00th=[37487], 80.00th=[41681], 90.00th=[50594], 95.00th=[60031], 00:11:17.351 | 99.00th=[82314], 99.50th=[82314], 99.90th=[82314], 99.95th=[85459], 00:11:17.351 | 99.99th=[86508] 00:11:17.351 write: IOPS=2048, BW=8195KiB/s (8392kB/s)(8228KiB/1004msec); 0 zone resets 00:11:17.351 slat (usec): min=4, max=24974, avg=217.34, stdev=1438.29 00:11:17.351 clat (usec): min=1807, max=78422, avg=29215.95, stdev=11875.41 00:11:17.351 lat (usec): min=5402, max=78473, avg=29433.29, stdev=12006.72 00:11:17.351 clat percentiles (usec): 00:11:17.351 | 1.00th=[16319], 5.00th=[19792], 10.00th=[20055], 20.00th=[20317], 00:11:17.351 | 30.00th=[20579], 40.00th=[21103], 50.00th=[23725], 60.00th=[29492], 00:11:17.351 | 70.00th=[32113], 80.00th=[35914], 90.00th=[51119], 95.00th=[53740], 00:11:17.351 | 99.00th=[61604], 99.50th=[67634], 99.90th=[68682], 99.95th=[69731], 00:11:17.351 | 99.99th=[78119] 00:11:17.351 bw ( KiB/s): min= 7936, max= 8448, per=15.72%, avg=8192.00, stdev=362.04, samples=2 00:11:17.351 iops : min= 1984, max= 2112, avg=2048.00, stdev=90.51, samples=2 00:11:17.351 lat (msec) : 2=0.02%, 10=1.02%, 20=4.41%, 50=82.95%, 100=11.60% 00:11:17.351 cpu : usr=2.29%, sys=3.99%, ctx=135, majf=0, minf=1 00:11:17.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:11:17.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.351 issued rwts: total=2048,2057,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.351 job3: (groupid=0, jobs=1): err= 0: 
pid=3385292: Sat Nov 9 23:44:43 2024 00:11:17.351 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:17.351 slat (usec): min=2, max=16439, avg=171.89, stdev=1102.52 00:11:17.351 clat (usec): min=8695, max=50252, avg=21199.78, stdev=6900.36 00:11:17.351 lat (usec): min=8699, max=50263, avg=21371.67, stdev=6986.86 00:11:17.351 clat percentiles (usec): 00:11:17.351 | 1.00th=[11863], 5.00th=[13042], 10.00th=[14091], 20.00th=[14615], 00:11:17.351 | 30.00th=[15270], 40.00th=[18220], 50.00th=[20055], 60.00th=[22414], 00:11:17.351 | 70.00th=[24773], 80.00th=[26870], 90.00th=[30016], 95.00th=[34866], 00:11:17.351 | 99.00th=[39060], 99.50th=[45876], 99.90th=[50070], 99.95th=[50070], 00:11:17.351 | 99.99th=[50070] 00:11:17.351 write: IOPS=2605, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:11:17.351 slat (usec): min=3, max=11612, avg=208.27, stdev=980.73 00:11:17.351 clat (usec): min=546, max=55797, avg=27538.20, stdev=14508.20 00:11:17.351 lat (usec): min=5767, max=56825, avg=27746.47, stdev=14606.19 00:11:17.351 clat percentiles (usec): 00:11:17.351 | 1.00th=[ 5997], 5.00th=[14091], 10.00th=[14222], 20.00th=[14615], 00:11:17.351 | 30.00th=[15795], 40.00th=[17695], 50.00th=[19530], 60.00th=[25035], 00:11:17.351 | 70.00th=[39584], 80.00th=[43254], 90.00th=[50070], 95.00th=[53740], 00:11:17.351 | 99.00th=[55313], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:11:17.351 | 99.99th=[55837] 00:11:17.351 bw ( KiB/s): min= 8504, max=11976, per=19.65%, avg=10240.00, stdev=2455.07, samples=2 00:11:17.351 iops : min= 2126, max= 2994, avg=2560.00, stdev=613.77, samples=2 00:11:17.351 lat (usec) : 750=0.02% 00:11:17.351 lat (msec) : 10=1.45%, 20=48.43%, 50=45.24%, 100=4.86% 00:11:17.351 cpu : usr=1.80%, sys=3.50%, ctx=294, majf=0, minf=1 00:11:17.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:17.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.351 issued rwts: total=2560,2608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.351 00:11:17.351 Run status group 0 (all jobs): 00:11:17.351 READ: bw=49.2MiB/s (51.6MB/s), 8159KiB/s-15.8MiB/s (8355kB/s-16.6MB/s), io=49.8MiB (52.2MB), run=1001-1013msec 00:11:17.351 WRITE: bw=50.9MiB/s (53.4MB/s), 8195KiB/s-17.1MiB/s (8392kB/s-17.9MB/s), io=51.6MiB (54.1MB), run=1001-1013msec 00:11:17.351 00:11:17.351 Disk stats (read/write): 00:11:17.351 nvme0n1: ios=3381/3584, merge=0/0, ticks=12796/13151, in_queue=25947, util=86.97% 00:11:17.351 nvme0n2: ios=3599/3763, merge=0/0, ticks=36093/34365, in_queue=70458, util=86.90% 00:11:17.351 nvme0n3: ios=1575/2048, merge=0/0, ticks=15972/18187, in_queue=34159, util=96.98% 00:11:17.351 nvme0n4: ios=1897/2048, merge=0/0, ticks=21483/30540, in_queue=52023, util=89.81% 00:11:17.351 23:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:17.351 [global] 00:11:17.351 thread=1 00:11:17.351 invalidate=1 00:11:17.351 rw=randwrite 00:11:17.351 time_based=1 00:11:17.351 runtime=1 00:11:17.351 ioengine=libaio 00:11:17.351 direct=1 00:11:17.351 bs=4096 00:11:17.351 iodepth=128 00:11:17.351 norandommap=0 00:11:17.351 numjobs=1 00:11:17.351 00:11:17.351 verify_dump=1 00:11:17.351 verify_backlog=512 00:11:17.351 verify_state_save=0 00:11:17.351 do_verify=1 
00:11:17.351 verify=crc32c-intel 00:11:17.351 [job0] 00:11:17.351 filename=/dev/nvme0n1 00:11:17.351 [job1] 00:11:17.351 filename=/dev/nvme0n2 00:11:17.351 [job2] 00:11:17.351 filename=/dev/nvme0n3 00:11:17.351 [job3] 00:11:17.351 filename=/dev/nvme0n4 00:11:17.351 Could not set queue depth (nvme0n1) 00:11:17.351 Could not set queue depth (nvme0n2) 00:11:17.351 Could not set queue depth (nvme0n3) 00:11:17.351 Could not set queue depth (nvme0n4) 00:11:17.610 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:17.610 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:17.610 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:17.610 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:17.610 fio-3.35 00:11:17.610 Starting 4 threads 00:11:18.985 00:11:18.985 job0: (groupid=0, jobs=1): err= 0: pid=3385530: Sat Nov 9 23:44:44 2024 00:11:18.985 read: IOPS=3099, BW=12.1MiB/s (12.7MB/s)(12.3MiB/1012msec) 00:11:18.985 slat (usec): min=2, max=15728, avg=137.27, stdev=935.34 00:11:18.985 clat (usec): min=3008, max=41722, avg=18296.80, stdev=5445.42 00:11:18.985 lat (usec): min=7292, max=41736, avg=18434.07, stdev=5518.84 00:11:18.985 clat percentiles (usec): 00:11:18.985 | 1.00th=[ 7308], 5.00th=[10814], 10.00th=[13173], 20.00th=[13698], 00:11:18.985 | 30.00th=[15270], 40.00th=[16319], 50.00th=[17433], 60.00th=[17695], 00:11:18.985 | 70.00th=[20317], 80.00th=[23200], 90.00th=[25560], 95.00th=[30802], 00:11:18.985 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[38536], 00:11:18.985 | 99.99th=[41681] 00:11:18.985 write: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec); 0 zone resets 00:11:18.985 slat (usec): min=3, max=20186, avg=141.94, stdev=1017.31 00:11:18.985 clat (usec): min=256, max=95674, avg=19818.74, stdev=15489.27 00:11:18.985 lat (usec): min=1131, max=95679, avg=19960.68, stdev=15592.61 00:11:18.985 clat percentiles (usec): 00:11:18.985 | 1.00th=[ 2089], 5.00th=[ 5145], 10.00th=[ 9372], 20.00th=[11338], 00:11:18.985 | 30.00th=[13042], 40.00th=[14091], 50.00th=[16188], 60.00th=[17171], 00:11:18.985 | 70.00th=[20055], 80.00th=[24773], 90.00th=[31589], 95.00th=[49021], 00:11:18.985 | 99.00th=[93848], 99.50th=[93848], 99.90th=[95945], 99.95th=[95945], 00:11:18.985 | 99.99th=[95945] 00:11:18.985 bw ( KiB/s): min=13880, max=14288, per=26.04%, avg=14084.00, stdev=288.50, samples=2 00:11:18.985 iops : min= 3470, max= 3572, avg=3521.00, stdev=72.12, samples=2 00:11:18.985 lat (usec) : 500=0.01% 00:11:18.985 lat (msec) : 2=0.25%, 4=1.18%, 10=9.43%, 20=58.62%, 50=28.02% 00:11:18.985 lat (msec) : 100=2.48% 00:11:18.985 cpu : usr=2.08%, sys=4.75%, ctx=230, majf=0, minf=1 00:11:18.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:18.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.985 issued rwts: total=3137,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.985 job1: (groupid=0, jobs=1): err= 0: pid=3385531: Sat Nov 9 23:44:44 2024 00:11:18.985 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:11:18.985 slat (usec): min=2, max=17109, avg=122.84, stdev=922.68 00:11:18.985 clat (usec): min=3228, 
max=61145, avg=15189.05, stdev=6744.28 00:11:18.985 lat (usec): min=3234, max=61153, avg=15311.89, stdev=6802.70 00:11:18.985 clat percentiles (usec): 00:11:18.985 | 1.00th=[ 6652], 5.00th=[10814], 10.00th=[11731], 20.00th=[12387], 00:11:18.985 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:11:18.985 | 70.00th=[14484], 80.00th=[17171], 90.00th=[20317], 95.00th=[23987], 00:11:18.985 | 99.00th=[49021], 99.50th=[58459], 99.90th=[60031], 99.95th=[60031], 00:11:18.985 | 99.99th=[61080] 00:11:18.985 write: IOPS=4357, BW=17.0MiB/s (17.8MB/s)(17.2MiB/1013msec); 0 zone resets 00:11:18.985 slat (usec): min=3, max=12079, avg=98.06, stdev=539.94 00:11:18.985 clat (usec): min=750, max=55336, avg=15006.45, stdev=8967.85 00:11:18.985 lat (usec): min=760, max=55344, avg=15104.51, stdev=9031.38 00:11:18.985 clat percentiles (usec): 00:11:18.986 | 1.00th=[ 2024], 5.00th=[ 4555], 10.00th=[ 7373], 20.00th=[10814], 00:11:18.986 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13173], 60.00th=[13698], 00:11:18.986 | 70.00th=[14353], 80.00th=[14877], 90.00th=[26346], 95.00th=[36963], 00:11:18.986 | 99.00th=[49021], 99.50th=[53216], 99.90th=[55313], 99.95th=[55313], 00:11:18.986 | 99.99th=[55313] 00:11:18.986 bw ( KiB/s): min=16704, max=17592, per=31.71%, avg=17148.00, stdev=627.91, samples=2 00:11:18.986 iops : min= 4176, max= 4398, avg=4287.00, stdev=156.98, samples=2 00:11:18.986 lat (usec) : 1000=0.13% 00:11:18.986 lat (msec) : 2=0.39%, 4=1.80%, 10=8.39%, 20=76.42%, 50=12.02% 00:11:18.986 lat (msec) : 100=0.86% 00:11:18.986 cpu : usr=3.16%, sys=6.42%, ctx=442, majf=0, minf=1 00:11:18.986 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:18.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.986 issued rwts: total=4096,4414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.986 job2: (groupid=0, jobs=1): err= 0: pid=3385532: Sat Nov 9 23:44:44 2024 00:11:18.986 read: IOPS=3779, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1003msec) 00:11:18.986 slat (usec): min=3, max=7078, avg=121.01, stdev=657.11 00:11:18.986 clat (usec): min=2198, max=28808, avg=15792.26, stdev=2653.37 00:11:18.986 lat (usec): min=2212, max=28824, avg=15913.27, stdev=2703.81 00:11:18.986 clat percentiles (usec): 00:11:18.986 | 1.00th=[ 8455], 5.00th=[12387], 10.00th=[13304], 20.00th=[13960], 00:11:18.986 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15664], 60.00th=[16057], 00:11:18.986 | 70.00th=[16450], 80.00th=[16909], 90.00th=[19006], 95.00th=[21627], 00:11:18.986 | 99.00th=[22152], 99.50th=[25560], 99.90th=[25560], 99.95th=[25822], 00:11:18.986 | 99.99th=[28705] 00:11:18.986 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:11:18.986 slat (usec): min=4, max=14388, avg=122.26, stdev=752.74 00:11:18.986 clat (usec): min=9440, max=48371, avg=16295.99, stdev=4488.93 00:11:18.986 lat (usec): min=9451, max=48391, avg=16418.25, stdev=4552.39 00:11:18.986 clat percentiles (usec): 00:11:18.986 | 1.00th=[11338], 5.00th=[12387], 10.00th=[12911], 20.00th=[13829], 00:11:18.986 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15401], 60.00th=[15795], 00:11:18.986 | 70.00th=[16581], 80.00th=[17433], 90.00th=[18220], 95.00th=[23725], 00:11:18.986 | 99.00th=[36963], 99.50th=[40633], 99.90th=[40633], 99.95th=[41681], 00:11:18.986 | 99.99th=[48497] 00:11:18.986 bw ( KiB/s): min=16384, max=16384, per=30.30%, 
avg=16384.00, stdev= 0.00, samples=2 00:11:18.986 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:11:18.986 lat (msec) : 4=0.19%, 10=0.67%, 20=91.40%, 50=7.73% 00:11:18.986 cpu : usr=4.89%, sys=8.28%, ctx=257, majf=0, minf=1 00:11:18.986 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:18.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.986 issued rwts: total=3791,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.986 job3: (groupid=0, jobs=1): err= 0: pid=3385533: Sat Nov 9 23:44:44 2024 00:11:18.986 read: IOPS=1759, BW=7036KiB/s (7205kB/s)(7360KiB/1046msec) 00:11:18.986 slat (usec): min=2, max=44896, avg=282.62, stdev=1990.48 00:11:18.986 clat (msec): min=13, max=108, avg=35.50, stdev=19.29 00:11:18.986 lat (msec): min=13, max=108, avg=35.78, stdev=19.41 00:11:18.986 clat percentiles (msec): 00:11:18.986 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 18], 20.00th=[ 21], 00:11:18.986 | 30.00th=[ 24], 40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 31], 00:11:18.986 | 70.00th=[ 45], 80.00th=[ 58], 90.00th=[ 67], 95.00th=[ 68], 00:11:18.986 | 99.00th=[ 101], 99.50th=[ 101], 99.90th=[ 101], 99.95th=[ 109], 00:11:18.986 | 99.99th=[ 109] 00:11:18.986 write: IOPS=1957, BW=7832KiB/s (8020kB/s)(8192KiB/1046msec); 0 zone resets 00:11:18.986 slat (usec): min=3, max=18126, avg=228.06, stdev=1147.80 00:11:18.986 clat (msec): min=9, max=100, avg=31.79, stdev=19.77 00:11:18.986 lat (msec): min=9, max=100, avg=32.01, stdev=19.86 00:11:18.986 clat percentiles (msec): 00:11:18.986 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 18], 00:11:18.986 | 30.00th=[ 18], 40.00th=[ 23], 50.00th=[ 26], 60.00th=[ 31], 00:11:18.986 | 70.00th=[ 33], 80.00th=[ 45], 90.00th=[ 63], 95.00th=[ 80], 00:11:18.986 | 99.00th=[ 94], 99.50th=[ 95], 99.90th=[ 96], 99.95th=[ 96], 00:11:18.986 | 99.99th=[ 101] 00:11:18.986 bw ( KiB/s): min= 8192, max= 8192, per=15.15%, avg=8192.00, stdev= 0.00, samples=2 00:11:18.986 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:18.986 lat (msec) : 10=0.54%, 20=27.78%, 50=52.21%, 100=18.88%, 250=0.59% 00:11:18.986 cpu : usr=2.11%, sys=3.25%, ctx=218, majf=0, minf=1 00:11:18.986 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:18.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.986 issued rwts: total=1840,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.986 00:11:18.986 Run status group 0 (all jobs): 00:11:18.986 READ: bw=48.0MiB/s (50.4MB/s), 7036KiB/s-15.8MiB/s (7205kB/s-16.6MB/s), io=50.2MiB (52.7MB), run=1003-1046msec 00:11:18.986 WRITE: bw=52.8MiB/s (55.4MB/s), 7832KiB/s-17.0MiB/s (8020kB/s-17.8MB/s), io=55.2MiB (57.9MB), run=1003-1046msec 00:11:18.986 00:11:18.986 Disk stats (read/write): 00:11:18.986 nvme0n1: ios=2610/2623, merge=0/0, ticks=30009/30085, in_queue=60094, util=87.78% 00:11:18.986 nvme0n2: ios=3444/3584, merge=0/0, ticks=43325/49055, in_queue=92380, util=98.98% 00:11:18.986 nvme0n3: ios=3289/3584, merge=0/0, ticks=16574/17536, in_queue=34110, util=91.79% 00:11:18.986 nvme0n4: ios=1416/1536, merge=0/0, ticks=15873/13947, in_queue=29820, util=96.13% 00:11:18.986 23:44:44 
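What follows in the trace (target/fio.sh@55 onward) is the hotplug phase: a 10-second read job is started against the connected namespaces and the backing bdevs are deleted underneath it, so the "Operation not supported" errors fio reports below are the expected outcome. A condensed sketch, reusing $RPC from the earlier sketch; the fio-wrapper path is copied from the trace, while the exit-status handling is a simplification of the script's fio_status bookkeeping.

FIO_WRAPPER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper

sync
"$FIO_WRAPPER" -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10 s of 4 KiB reads, depth 1
fio_pid=$!

# Tear the block devices out from under the running job.
"$RPC" bdev_raid_delete concat0
"$RPC" bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$RPC" bdev_malloc_delete "$m"
done

fio_status=0
wait "$fio_pid" || fio_status=$?
[ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'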
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:18.986 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3385671 00:11:18.986 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:18.986 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:18.986 [global] 00:11:18.986 thread=1 00:11:18.986 invalidate=1 00:11:18.986 rw=read 00:11:18.986 time_based=1 00:11:18.986 runtime=10 00:11:18.986 ioengine=libaio 00:11:18.986 direct=1 00:11:18.986 bs=4096 00:11:18.986 iodepth=1 00:11:18.986 norandommap=1 00:11:18.986 numjobs=1 00:11:18.986 00:11:18.986 [job0] 00:11:18.986 filename=/dev/nvme0n1 00:11:18.986 [job1] 00:11:18.986 filename=/dev/nvme0n2 00:11:18.986 [job2] 00:11:18.986 filename=/dev/nvme0n3 00:11:18.986 [job3] 00:11:18.986 filename=/dev/nvme0n4 00:11:18.986 Could not set queue depth (nvme0n1) 00:11:18.986 Could not set queue depth (nvme0n2) 00:11:18.986 Could not set queue depth (nvme0n3) 00:11:18.986 Could not set queue depth (nvme0n4) 00:11:19.244 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.244 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.244 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.244 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.245 fio-3.35 00:11:19.245 Starting 4 threads 00:11:22.526 23:44:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:22.526 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=20566016, buflen=4096 00:11:22.526 fio: pid=3385762, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:22.526 23:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:22.526 23:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.526 23:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:22.526 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=327680, buflen=4096 00:11:22.526 fio: pid=3385761, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:22.784 23:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.784 23:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:22.784 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=16699392, buflen=4096 00:11:22.784 fio: pid=3385759, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:23.043 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=40017920, buflen=4096 00:11:23.043 fio: pid=3385760, err=95/file:io_u.c:1889, 
func=io_u error, error=Operation not supported 00:11:23.301 23:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.301 23:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:23.301 00:11:23.301 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3385759: Sat Nov 9 23:44:49 2024 00:11:23.301 read: IOPS=1151, BW=4603KiB/s (4713kB/s)(15.9MiB/3543msec) 00:11:23.301 slat (usec): min=5, max=12917, avg=18.71, stdev=254.73 00:11:23.301 clat (usec): min=221, max=63065, avg=840.75, stdev=4710.44 00:11:23.301 lat (usec): min=227, max=63078, avg=859.47, stdev=4765.62 00:11:23.301 clat percentiles (usec): 00:11:23.301 | 1.00th=[ 237], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 273], 00:11:23.301 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 00:11:23.301 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 326], 95.00th=[ 347], 00:11:23.301 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:23.301 | 99.99th=[63177] 00:11:23.301 bw ( KiB/s): min= 96, max=13360, per=27.52%, avg=5420.00, stdev=6319.09, samples=6 00:11:23.301 iops : min= 24, max= 3340, avg=1355.00, stdev=1579.77, samples=6 00:11:23.301 lat (usec) : 250=5.69%, 500=92.57%, 750=0.12%, 1000=0.22% 00:11:23.301 lat (msec) : 2=0.02%, 4=0.02%, 50=1.30%, 100=0.02% 00:11:23.301 cpu : usr=1.41%, sys=1.98%, ctx=4080, majf=0, minf=1 00:11:23.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.301 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.301 issued rwts: total=4078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.301 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3385760: Sat Nov 9 23:44:49 2024 00:11:23.301 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(38.2MiB/3848msec) 00:11:23.301 slat (usec): min=5, max=27218, avg=16.66, stdev=307.80 00:11:23.301 clat (usec): min=223, max=41153, avg=371.02, stdev=1884.38 00:11:23.301 lat (usec): min=229, max=47929, avg=387.68, stdev=1924.62 00:11:23.301 clat percentiles (usec): 00:11:23.301 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 260], 00:11:23.301 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:11:23.301 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:11:23.301 | 99.00th=[ 375], 99.50th=[ 523], 99.90th=[41157], 99.95th=[41157], 00:11:23.301 | 99.99th=[41157] 00:11:23.301 bw ( KiB/s): min= 126, max=14288, per=56.63%, avg=11154.00, stdev=4934.51, samples=7 00:11:23.301 iops : min= 31, max= 3572, avg=2788.43, stdev=1233.81, samples=7 00:11:23.301 lat (usec) : 250=9.56%, 500=89.89%, 750=0.28%, 1000=0.03% 00:11:23.301 lat (msec) : 2=0.01%, 10=0.01%, 50=0.21% 00:11:23.301 cpu : usr=2.18%, sys=4.65%, ctx=9777, majf=0, minf=2 00:11:23.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.301 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.301 issued rwts: total=9771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.301 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:11:23.301 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3385761: Sat Nov 9 23:44:49 2024 00:11:23.301 read: IOPS=25, BW=99.3KiB/s (102kB/s)(320KiB/3222msec) 00:11:23.301 slat (nsec): min=12927, max=46147, avg=24348.96, stdev=10065.69 00:11:23.301 clat (usec): min=372, max=41323, avg=39952.65, stdev=6327.41 00:11:23.301 lat (usec): min=408, max=41341, avg=39976.86, stdev=6325.16 00:11:23.301 clat percentiles (usec): 00:11:23.301 | 1.00th=[ 371], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:23.301 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:23.301 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:23.301 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:23.301 | 99.99th=[41157] 00:11:23.301 bw ( KiB/s): min= 96, max= 104, per=0.51%, avg=100.00, stdev= 4.38, samples=6 00:11:23.301 iops : min= 24, max= 26, avg=25.00, stdev= 1.10, samples=6 00:11:23.301 lat (usec) : 500=1.23% 00:11:23.301 lat (msec) : 2=1.23%, 50=96.30% 00:11:23.301 cpu : usr=0.12%, sys=0.00%, ctx=81, majf=0, minf=1 00:11:23.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.301 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.301 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.301 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3385762: Sat Nov 9 23:44:49 2024 00:11:23.301 read: IOPS=1726, BW=6906KiB/s (7072kB/s)(19.6MiB/2908msec) 00:11:23.301 slat (nsec): min=5437, max=56074, avg=12536.22, stdev=6050.53 00:11:23.301 clat (usec): min=229, max=41033, avg=561.50, stdev=3080.43 00:11:23.301 lat (usec): min=236, max=41061, avg=574.04, stdev=3081.51 00:11:23.301 clat percentiles (usec): 00:11:23.301 | 1.00th=[ 241], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 269], 00:11:23.301 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 347], 00:11:23.301 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 375], 00:11:23.301 | 99.00th=[ 529], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:23.302 | 99.99th=[41157] 00:11:23.302 bw ( KiB/s): min= 96, max=11160, per=31.04%, avg=6113.60, stdev=5588.07, samples=5 00:11:23.302 iops : min= 24, max= 2790, avg=1528.40, stdev=1397.02, samples=5 00:11:23.302 lat (usec) : 250=5.10%, 500=92.93%, 750=1.33%, 1000=0.02% 00:11:23.302 lat (msec) : 4=0.02%, 50=0.58% 00:11:23.302 cpu : usr=1.27%, sys=3.51%, ctx=5022, majf=0, minf=1 00:11:23.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.302 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.302 issued rwts: total=5022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.302 00:11:23.302 Run status group 0 (all jobs): 00:11:23.302 READ: bw=19.2MiB/s (20.2MB/s), 99.3KiB/s-9.92MiB/s (102kB/s-10.4MB/s), io=74.0MiB (77.6MB), run=2908-3848msec 00:11:23.302 00:11:23.302 Disk stats (read/write): 00:11:23.302 nvme0n1: ios=4073/0, merge=0/0, ticks=3214/0, in_queue=3214, util=95.42% 00:11:23.302 nvme0n2: ios=9763/0, merge=0/0, ticks=3296/0, 
in_queue=3296, util=95.74% 00:11:23.302 nvme0n3: ios=77/0, merge=0/0, ticks=3076/0, in_queue=3076, util=96.82% 00:11:23.302 nvme0n4: ios=4875/0, merge=0/0, ticks=2698/0, in_queue=2698, util=96.75% 00:11:23.566 23:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.566 23:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:23.826 23:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.826 23:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:24.084 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:24.084 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:24.651 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:24.651 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:24.909 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:24.910 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3385671 00:11:24.910 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:24.910 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.844 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.844 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:11:25.844 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:25.844 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.844 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:25.844 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.844 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:11:25.844 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:25.844 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:25.844 nvmf hotplug test: fio failed as expected 00:11:25.844 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # 
rm -f ./local-job0-0-verify.state 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.102 rmmod nvme_tcp 00:11:26.102 rmmod nvme_fabrics 00:11:26.102 rmmod nvme_keyring 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3383500 ']' 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3383500 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3383500 ']' 00:11:26.102 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3383500 00:11:26.103 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:11:26.103 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:26.103 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3383500 00:11:26.103 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:26.103 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:26.103 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3383500' 00:11:26.103 killing process with pid 3383500 00:11:26.103 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3383500 00:11:26.103 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3383500 00:11:27.477 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.477 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.477 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.477 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:27.477 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:11:27.477 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.477 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.477 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.477 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:27.477 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.477 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.477 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.385 00:11:29.385 real 0m27.224s 00:11:29.385 user 1m35.311s 00:11:29.385 sys 0m7.123s 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.385 ************************************ 00:11:29.385 END TEST nvmf_fio_target 00:11:29.385 ************************************ 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:29.385 ************************************ 00:11:29.385 START TEST nvmf_bdevio 00:11:29.385 ************************************ 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:29.385 * Looking for test storage... 
00:11:29.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.385 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:29.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.386 --rc genhtml_branch_coverage=1 00:11:29.386 --rc genhtml_function_coverage=1 00:11:29.386 --rc genhtml_legend=1 00:11:29.386 --rc geninfo_all_blocks=1 00:11:29.386 --rc geninfo_unexecuted_blocks=1 00:11:29.386 00:11:29.386 ' 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:29.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.386 --rc genhtml_branch_coverage=1 00:11:29.386 --rc genhtml_function_coverage=1 00:11:29.386 --rc genhtml_legend=1 00:11:29.386 --rc geninfo_all_blocks=1 00:11:29.386 --rc geninfo_unexecuted_blocks=1 00:11:29.386 00:11:29.386 ' 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:29.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.386 --rc genhtml_branch_coverage=1 00:11:29.386 --rc genhtml_function_coverage=1 00:11:29.386 --rc genhtml_legend=1 00:11:29.386 --rc geninfo_all_blocks=1 00:11:29.386 --rc geninfo_unexecuted_blocks=1 00:11:29.386 00:11:29.386 ' 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:29.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.386 --rc genhtml_branch_coverage=1 00:11:29.386 --rc genhtml_function_coverage=1 00:11:29.386 --rc genhtml_legend=1 00:11:29.386 --rc geninfo_all_blocks=1 00:11:29.386 --rc geninfo_unexecuted_blocks=1 00:11:29.386 00:11:29.386 ' 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.386 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.646 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.646 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.646 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.646 23:44:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:31.547 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.547 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:31.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:31.548 23:44:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:31.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:31.548 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.548 
23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.548 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:31.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:11:31.807 00:11:31.807 --- 10.0.0.2 ping statistics --- 00:11:31.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.807 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:11:31.807 00:11:31.807 --- 10.0.0.1 ping statistics --- 00:11:31.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.807 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:31.807 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3388669 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3388669 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3388669 ']' 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:31.808 23:44:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.808 [2024-11-09 23:44:57.967330] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:11:31.808 [2024-11-09 23:44:57.967498] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.066 [2024-11-09 23:44:58.123271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.324 [2024-11-09 23:44:58.270030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.324 [2024-11-09 23:44:58.270109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.324 [2024-11-09 23:44:58.270136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.324 [2024-11-09 23:44:58.270161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.324 [2024-11-09 23:44:58.270182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.324 [2024-11-09 23:44:58.273130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:32.324 [2024-11-09 23:44:58.273188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:32.324 [2024-11-09 23:44:58.273239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.324 [2024-11-09 23:44:58.273246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:32.890 23:44:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:32.890 23:44:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:11:32.890 23:44:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:32.890 23:44:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:32.890 23:44:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.890 23:44:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.890 23:44:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:32.890 23:44:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.890 23:44:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.890 [2024-11-09 23:44:58.977344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.890 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.890 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:32.890 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.890 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.890 Malloc0 00:11:32.890 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.890 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:32.890 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.890 23:44:59 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.890 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.890 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:32.890 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.890 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.148 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.149 [2024-11-09 23:44:59.097212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:33.149 { 00:11:33.149 "params": { 00:11:33.149 "name": "Nvme$subsystem", 00:11:33.149 "trtype": "$TEST_TRANSPORT", 00:11:33.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:33.149 "adrfam": "ipv4", 00:11:33.149 "trsvcid": "$NVMF_PORT", 00:11:33.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:33.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:33.149 "hdgst": ${hdgst:-false}, 00:11:33.149 "ddgst": ${ddgst:-false} 00:11:33.149 }, 00:11:33.149 "method": "bdev_nvme_attach_controller" 00:11:33.149 } 00:11:33.149 EOF 00:11:33.149 )") 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:33.149 23:44:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:33.149 "params": { 00:11:33.149 "name": "Nvme1", 00:11:33.149 "trtype": "tcp", 00:11:33.149 "traddr": "10.0.0.2", 00:11:33.149 "adrfam": "ipv4", 00:11:33.149 "trsvcid": "4420", 00:11:33.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:33.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:33.149 "hdgst": false, 00:11:33.149 "ddgst": false 00:11:33.149 }, 00:11:33.149 "method": "bdev_nvme_attach_controller" 00:11:33.149 }' 00:11:33.149 [2024-11-09 23:44:59.179204] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:11:33.149 [2024-11-09 23:44:59.179340] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388825 ] 00:11:33.149 [2024-11-09 23:44:59.313834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:33.407 [2024-11-09 23:44:59.449292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.407 [2024-11-09 23:44:59.449338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.407 [2024-11-09 23:44:59.449343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.971 I/O targets: 00:11:33.971 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:33.971 00:11:33.971 00:11:33.971 CUnit - A unit testing framework for C - Version 2.1-3 00:11:33.971 http://cunit.sourceforge.net/ 00:11:33.971 00:11:33.971 00:11:33.971 Suite: bdevio tests on: Nvme1n1 00:11:33.971 Test: blockdev write read block ...passed 00:11:33.971 Test: blockdev write zeroes read block ...passed 00:11:33.971 Test: blockdev write zeroes read no split ...passed 00:11:34.229 Test: blockdev write zeroes read split ...passed 00:11:34.229 Test: blockdev write zeroes read split partial ...passed 00:11:34.229 Test: blockdev reset ...[2024-11-09 23:45:00.277489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:34.229 [2024-11-09 23:45:00.277696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:11:34.229 [2024-11-09 23:45:00.291694] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:34.229 passed 00:11:34.229 Test: blockdev write read 8 blocks ...passed 00:11:34.229 Test: blockdev write read size > 128k ...passed 00:11:34.229 Test: blockdev write read invalid size ...passed 00:11:34.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:34.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:34.229 Test: blockdev write read max offset ...passed 00:11:34.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:34.487 Test: blockdev writev readv 8 blocks ...passed 00:11:34.487 Test: blockdev writev readv 30 x 1block ...passed 00:11:34.487 Test: blockdev writev readv block ...passed 00:11:34.487 Test: blockdev writev readv size > 128k ...passed 00:11:34.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:34.487 Test: blockdev comparev and writev ...[2024-11-09 23:45:00.550826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.487 [2024-11-09 23:45:00.550895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:34.487 [2024-11-09 23:45:00.550934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.487 [2024-11-09 23:45:00.550960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:34.487 [2024-11-09 23:45:00.551435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.487 [2024-11-09 23:45:00.551471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:34.487 [2024-11-09 23:45:00.551507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.487 [2024-11-09 23:45:00.551532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:34.487 [2024-11-09 23:45:00.552014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.487 [2024-11-09 23:45:00.552049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:34.487 [2024-11-09 23:45:00.552083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.487 [2024-11-09 23:45:00.552108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:34.487 [2024-11-09 23:45:00.552574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.487 [2024-11-09 23:45:00.552617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:34.487 [2024-11-09 23:45:00.552652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:34.487 [2024-11-09 23:45:00.552678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:34.487 passed 00:11:34.487 Test: blockdev nvme passthru rw ...passed 00:11:34.487 Test: blockdev nvme passthru vendor specific ...[2024-11-09 23:45:00.636059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:34.487 [2024-11-09 23:45:00.636115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:34.487 [2024-11-09 23:45:00.636356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:34.487 [2024-11-09 23:45:00.636391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:34.487 [2024-11-09 23:45:00.636608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:34.487 [2024-11-09 23:45:00.636642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:34.487 [2024-11-09 23:45:00.636836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:34.487 [2024-11-09 23:45:00.636869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:34.487 passed 00:11:34.487 Test: blockdev nvme admin passthru ...passed 00:11:34.745 Test: blockdev copy ...passed 00:11:34.745 00:11:34.745 Run Summary: Type Total Ran Passed Failed Inactive 00:11:34.745 suites 1 1 n/a 0 0 00:11:34.745 tests 23 23 23 0 0 00:11:34.745 asserts 152 152 152 0 n/a 00:11:34.745 00:11:34.745 Elapsed time = 1.274 seconds 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.680 rmmod nvme_tcp 00:11:35.680 rmmod nvme_fabrics 00:11:35.680 rmmod nvme_keyring 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3388669 ']' 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3388669 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3388669 ']' 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3388669 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3388669 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3388669' 00:11:35.680 killing process with pid 3388669 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3388669 00:11:35.680 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3388669 00:11:37.057 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:37.057 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:37.057 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:37.057 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:37.057 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:37.057 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:37.057 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:37.057 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:37.057 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:37.057 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.057 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.057 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.962 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:38.962 00:11:38.962 real 0m9.546s 00:11:38.962 user 0m23.120s 00:11:38.962 sys 0m2.512s 00:11:38.962 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:38.962 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.962 ************************************ 00:11:38.962 END TEST nvmf_bdevio 00:11:38.962 ************************************ 00:11:38.962 23:45:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:38.962 00:11:38.962 real 4m30.001s 00:11:38.962 user 11m49.474s 00:11:38.962 sys 1m10.019s 
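The iptr fragments traced above (iptables-save, grep -v SPDK_NVMF, iptables-restore) undo firewall rules that the harness tags when it adds them; the matching insertion shows up later in the nvmf_example bring-up. A sketch of how the two halves most plausibly fit together — the pipeline form of the cleanup is inferred, while the insert command is the one shown verbatim further down in the trace:

    # rule added with a comment tag so it can be located again at teardown
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # cleanup: rewrite the ruleset with every SPDK_NVMF-tagged rule filtered out
    iptables-save | grep -v SPDK_NVMF | iptables-restore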
00:11:38.962 23:45:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:38.962 23:45:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:38.962 ************************************ 00:11:38.962 END TEST nvmf_target_core 00:11:38.962 ************************************ 00:11:38.962 23:45:04 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:38.962 23:45:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:38.962 23:45:04 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:38.962 23:45:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:38.962 ************************************ 00:11:38.962 START TEST nvmf_target_extra 00:11:38.962 ************************************ 00:11:38.962 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:38.962 * Looking for test storage... 00:11:38.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:38.962 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:38.962 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:11:38.962 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:38.962 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:38.962 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.962 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.962 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.962 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.962 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.962 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.962 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.222 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:39.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.223 --rc genhtml_branch_coverage=1 00:11:39.223 --rc genhtml_function_coverage=1 00:11:39.223 --rc genhtml_legend=1 00:11:39.223 --rc geninfo_all_blocks=1 00:11:39.223 --rc geninfo_unexecuted_blocks=1 00:11:39.223 00:11:39.223 ' 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:39.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.223 --rc genhtml_branch_coverage=1 00:11:39.223 --rc genhtml_function_coverage=1 00:11:39.223 --rc genhtml_legend=1 00:11:39.223 --rc geninfo_all_blocks=1 00:11:39.223 --rc geninfo_unexecuted_blocks=1 00:11:39.223 00:11:39.223 ' 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:39.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.223 --rc genhtml_branch_coverage=1 00:11:39.223 --rc genhtml_function_coverage=1 00:11:39.223 --rc genhtml_legend=1 00:11:39.223 --rc geninfo_all_blocks=1 00:11:39.223 --rc geninfo_unexecuted_blocks=1 00:11:39.223 00:11:39.223 ' 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:39.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.223 --rc genhtml_branch_coverage=1 00:11:39.223 --rc genhtml_function_coverage=1 00:11:39.223 --rc genhtml_legend=1 00:11:39.223 --rc geninfo_all_blocks=1 00:11:39.223 --rc geninfo_unexecuted_blocks=1 00:11:39.223 00:11:39.223 ' 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:39.223 ************************************ 00:11:39.223 START TEST nvmf_example 00:11:39.223 ************************************ 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:39.223 * Looking for test storage... 
00:11:39.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.223 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:39.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.224 --rc genhtml_branch_coverage=1 00:11:39.224 --rc genhtml_function_coverage=1 00:11:39.224 --rc genhtml_legend=1 00:11:39.224 --rc geninfo_all_blocks=1 00:11:39.224 --rc geninfo_unexecuted_blocks=1 00:11:39.224 00:11:39.224 ' 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:39.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.224 --rc genhtml_branch_coverage=1 00:11:39.224 --rc genhtml_function_coverage=1 00:11:39.224 --rc genhtml_legend=1 00:11:39.224 --rc geninfo_all_blocks=1 00:11:39.224 --rc geninfo_unexecuted_blocks=1 00:11:39.224 00:11:39.224 ' 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:39.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.224 --rc genhtml_branch_coverage=1 00:11:39.224 --rc genhtml_function_coverage=1 00:11:39.224 --rc genhtml_legend=1 00:11:39.224 --rc geninfo_all_blocks=1 00:11:39.224 --rc geninfo_unexecuted_blocks=1 00:11:39.224 00:11:39.224 ' 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:39.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.224 --rc genhtml_branch_coverage=1 00:11:39.224 --rc genhtml_function_coverage=1 00:11:39.224 --rc genhtml_legend=1 00:11:39.224 --rc geninfo_all_blocks=1 00:11:39.224 --rc geninfo_unexecuted_blocks=1 00:11:39.224 00:11:39.224 ' 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:39.224 23:45:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:39.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:39.224 23:45:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:39.224 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:39.225 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:41.758 23:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:41.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:41.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:41.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:41.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.758 23:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:41.758 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:41.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:41.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:11:41.759 00:11:41.759 --- 10.0.0.2 ping statistics --- 00:11:41.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.759 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:11:41.759 00:11:41.759 --- 10.0.0.1 ping statistics --- 00:11:41.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.759 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3391529 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3391529 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3391529 ']' 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:41.759 23:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:41.759 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.694 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.695 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.695 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:42.695 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.695 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.695 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.695 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.695 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.695 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:42.695 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.695 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.695 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:42.695 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:54.896 Initializing NVMe Controllers 00:11:54.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:54.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:54.896 Initialization complete. Launching workers. 00:11:54.896 ======================================================== 00:11:54.896 Latency(us) 00:11:54.896 Device Information : IOPS MiB/s Average min max 00:11:54.896 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12190.89 47.62 5249.23 1289.09 19045.31 00:11:54.896 ======================================================== 00:11:54.896 Total : 12190.89 47.62 5249.23 1289.09 19045.31 00:11:54.896 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.896 rmmod nvme_tcp 00:11:54.896 rmmod nvme_fabrics 00:11:54.896 rmmod nvme_keyring 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3391529 ']' 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3391529 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3391529 ']' 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3391529 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3391529 00:11:54.896 23:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3391529' 00:11:54.896 killing process with pid 3391529 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3391529 00:11:54.896 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3391529 00:11:54.896 nvmf threads initialize successfully 00:11:54.896 bdev subsystem init successfully 00:11:54.896 created a nvmf target service 00:11:54.896 create targets's poll groups done 00:11:54.896 all subsystems of target started 00:11:54.896 nvmf target is running 00:11:54.896 all subsystems of target stopped 00:11:54.896 destroy targets's poll groups done 00:11:54.896 destroyed the nvmf target service 00:11:54.896 bdev subsystem finish successfully 00:11:54.896 nvmf threads destroy successfully 00:11:54.896 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.896 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.896 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.896 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:54.896 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:54.896 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.896 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.896 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.896 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:54.896 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.896 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.896 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.272 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.272 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:56.272 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:56.272 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:56.272 00:11:56.272 real 0m17.217s 00:11:56.272 user 0m48.288s 00:11:56.272 sys 0m3.333s 00:11:56.272 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:56.272 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:56.272 ************************************ 00:11:56.272 END TEST nvmf_example 00:11:56.272 ************************************ 00:11:56.272 23:45:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:56.272 23:45:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:56.272 23:45:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:56.272 23:45:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.534 ************************************ 00:11:56.534 START TEST nvmf_filesystem 00:11:56.534 ************************************ 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:56.534 * Looking for test storage... 00:11:56.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:56.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.534 --rc genhtml_branch_coverage=1 00:11:56.534 --rc genhtml_function_coverage=1 00:11:56.534 --rc genhtml_legend=1 00:11:56.534 --rc geninfo_all_blocks=1 00:11:56.534 --rc geninfo_unexecuted_blocks=1 00:11:56.534 00:11:56.534 ' 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:56.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.534 --rc genhtml_branch_coverage=1 00:11:56.534 --rc genhtml_function_coverage=1 00:11:56.534 --rc genhtml_legend=1 00:11:56.534 --rc geninfo_all_blocks=1 00:11:56.534 --rc geninfo_unexecuted_blocks=1 00:11:56.534 00:11:56.534 ' 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:56.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.534 --rc genhtml_branch_coverage=1 00:11:56.534 --rc genhtml_function_coverage=1 00:11:56.534 --rc genhtml_legend=1 00:11:56.534 --rc geninfo_all_blocks=1 00:11:56.534 --rc geninfo_unexecuted_blocks=1 00:11:56.534 00:11:56.534 ' 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:56.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.534 --rc genhtml_branch_coverage=1 00:11:56.534 --rc genhtml_function_coverage=1 00:11:56.534 --rc genhtml_legend=1 00:11:56.534 --rc geninfo_all_blocks=1 00:11:56.534 --rc geninfo_unexecuted_blocks=1 00:11:56.534 00:11:56.534 ' 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:56.534 23:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:56.534 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:56.535 
23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:56.535 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:56.535 #define SPDK_CONFIG_H 00:11:56.535 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:56.535 #define SPDK_CONFIG_APPS 1 00:11:56.535 #define SPDK_CONFIG_ARCH native 00:11:56.535 #define SPDK_CONFIG_ASAN 1 00:11:56.535 #undef SPDK_CONFIG_AVAHI 00:11:56.535 #undef SPDK_CONFIG_CET 00:11:56.535 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:56.535 #define SPDK_CONFIG_COVERAGE 1 00:11:56.535 #define SPDK_CONFIG_CROSS_PREFIX 00:11:56.535 #undef SPDK_CONFIG_CRYPTO 00:11:56.535 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:56.535 #undef SPDK_CONFIG_CUSTOMOCF 00:11:56.536 #undef SPDK_CONFIG_DAOS 00:11:56.536 #define SPDK_CONFIG_DAOS_DIR 00:11:56.536 #define SPDK_CONFIG_DEBUG 1 00:11:56.536 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:56.536 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:56.536 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:56.536 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:56.536 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:56.536 #undef SPDK_CONFIG_DPDK_UADK 00:11:56.536 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:56.536 #define SPDK_CONFIG_EXAMPLES 1 00:11:56.536 #undef SPDK_CONFIG_FC 00:11:56.536 #define SPDK_CONFIG_FC_PATH 00:11:56.536 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:56.536 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:56.536 #define SPDK_CONFIG_FSDEV 1 00:11:56.536 #undef SPDK_CONFIG_FUSE 00:11:56.536 #undef SPDK_CONFIG_FUZZER 00:11:56.536 #define SPDK_CONFIG_FUZZER_LIB 00:11:56.536 #undef SPDK_CONFIG_GOLANG 00:11:56.536 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:56.536 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:56.536 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:56.536 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:56.536 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:56.536 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:56.536 #undef SPDK_CONFIG_HAVE_LZ4 00:11:56.536 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:56.536 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:56.536 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:56.536 #define SPDK_CONFIG_IDXD 1 00:11:56.536 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:56.536 #undef SPDK_CONFIG_IPSEC_MB 00:11:56.536 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:56.536 #define SPDK_CONFIG_ISAL 1 00:11:56.536 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:56.536 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:56.536 #define SPDK_CONFIG_LIBDIR 00:11:56.536 #undef SPDK_CONFIG_LTO 00:11:56.536 #define SPDK_CONFIG_MAX_LCORES 128 00:11:56.536 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:56.536 #define SPDK_CONFIG_NVME_CUSE 1 00:11:56.536 #undef SPDK_CONFIG_OCF 00:11:56.536 #define SPDK_CONFIG_OCF_PATH 00:11:56.536 #define SPDK_CONFIG_OPENSSL_PATH 00:11:56.536 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:56.536 #define SPDK_CONFIG_PGO_DIR 00:11:56.536 #undef SPDK_CONFIG_PGO_USE 00:11:56.536 #define SPDK_CONFIG_PREFIX /usr/local 00:11:56.536 #undef SPDK_CONFIG_RAID5F 00:11:56.536 #undef SPDK_CONFIG_RBD 00:11:56.536 #define SPDK_CONFIG_RDMA 1 00:11:56.536 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:56.536 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:56.536 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:56.536 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:56.536 #define SPDK_CONFIG_SHARED 1 00:11:56.536 #undef SPDK_CONFIG_SMA 00:11:56.536 #define SPDK_CONFIG_TESTS 1 00:11:56.536 #undef SPDK_CONFIG_TSAN 
00:11:56.536 #define SPDK_CONFIG_UBLK 1 00:11:56.536 #define SPDK_CONFIG_UBSAN 1 00:11:56.536 #undef SPDK_CONFIG_UNIT_TESTS 00:11:56.536 #undef SPDK_CONFIG_URING 00:11:56.536 #define SPDK_CONFIG_URING_PATH 00:11:56.536 #undef SPDK_CONFIG_URING_ZNS 00:11:56.536 #undef SPDK_CONFIG_USDT 00:11:56.536 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:56.536 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:56.536 #undef SPDK_CONFIG_VFIO_USER 00:11:56.536 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:56.536 #define SPDK_CONFIG_VHOST 1 00:11:56.536 #define SPDK_CONFIG_VIRTIO 1 00:11:56.536 #undef SPDK_CONFIG_VTUNE 00:11:56.536 #define SPDK_CONFIG_VTUNE_DIR 00:11:56.536 #define SPDK_CONFIG_WERROR 1 00:11:56.536 #define SPDK_CONFIG_WPDK_DIR 00:11:56.536 #undef SPDK_CONFIG_XNVME 00:11:56.536 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:56.536 23:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:56.536 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:56.537 23:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:56.537 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:56.538 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3393798 ]] 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3393798 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.iKbcU7 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.iKbcU7/tests/target /tmp/spdk.iKbcU7 00:11:56.539 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:56.799 23:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=55100059648 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988532224 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6888472576 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982897664 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375269376 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22437888 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30993838080 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994268160 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=430080 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:56.799 23:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:56.799 * Looking for test storage... 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=55100059648 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9103065088 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:11:56.799 23:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.799 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:56.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.800 --rc genhtml_branch_coverage=1 00:11:56.800 --rc genhtml_function_coverage=1 00:11:56.800 --rc genhtml_legend=1 00:11:56.800 --rc geninfo_all_blocks=1 00:11:56.800 --rc geninfo_unexecuted_blocks=1 00:11:56.800 00:11:56.800 ' 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:56.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.800 --rc genhtml_branch_coverage=1 00:11:56.800 --rc genhtml_function_coverage=1 00:11:56.800 --rc genhtml_legend=1 00:11:56.800 --rc geninfo_all_blocks=1 00:11:56.800 --rc geninfo_unexecuted_blocks=1 00:11:56.800 00:11:56.800 ' 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:56.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.800 --rc genhtml_branch_coverage=1 00:11:56.800 --rc genhtml_function_coverage=1 00:11:56.800 --rc genhtml_legend=1 00:11:56.800 --rc geninfo_all_blocks=1 00:11:56.800 --rc geninfo_unexecuted_blocks=1 00:11:56.800 00:11:56.800 ' 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:56.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.800 --rc genhtml_branch_coverage=1 00:11:56.800 --rc genhtml_function_coverage=1 00:11:56.800 --rc genhtml_legend=1 00:11:56.800 --rc geninfo_all_blocks=1 00:11:56.800 --rc geninfo_unexecuted_blocks=1 00:11:56.800 00:11:56.800 ' 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.800 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:59.334 
23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:59.334 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:59.334 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:59.334 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:59.334 Found net devices under 
0000:0a:00.1: cvl_0_1 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.334 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.335 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:59.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:11:59.335 00:11:59.335 --- 10.0.0.2 ping statistics --- 00:11:59.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.335 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:11:59.335 00:11:59.335 --- 10.0.0.1 ping statistics --- 00:11:59.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.335 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:59.335 ************************************ 00:11:59.335 START TEST nvmf_filesystem_no_in_capsule 00:11:59.335 ************************************ 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
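For reference, the topology that the two ping checks above just verified, condensed from the nvmf_tcp_init trace into plain ip(8)/iptables commands (interface names are the cvl_0_* netdevs detected earlier; one E810 port acts as the target inside a namespace, the other as the initiator in the default namespace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP listener port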
00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3395448 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3395448 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3395448 ']' 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:59.335 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.335 [2024-11-09 23:45:25.221100] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:11:59.335 [2024-11-09 23:45:25.221234] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.335 [2024-11-09 23:45:25.370170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.335 [2024-11-09 23:45:25.510147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.335 [2024-11-09 23:45:25.510226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.335 [2024-11-09 23:45:25.510252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.335 [2024-11-09 23:45:25.510275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.335 [2024-11-09 23:45:25.510294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
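nvmfappstart, traced above, amounts to launching nvmf_tgt inside the target namespace and then waitforlisten polling the RPC socket until the application answers. A simplified stand-in for that wait loop (rpc.py is assumed to be the copy under the SPDK scripts directory):

    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1     # give up if the target process died
        sleep 0.5
    done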
00:11:59.335 [2024-11-09 23:45:25.513169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.335 [2024-11-09 23:45:25.513223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.335 [2024-11-09 23:45:25.513310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.335 [2024-11-09 23:45:25.513318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.318 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:00.318 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:12:00.318 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.318 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:00.318 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.318 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.318 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:00.318 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:00.318 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.318 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.318 [2024-11-09 23:45:26.211644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.319 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.319 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:00.319 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.319 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.577 Malloc1 00:12:00.577 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.577 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.577 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.577 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.835 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.835 23:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.835 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.835 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.835 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.835 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.835 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.835 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.835 [2024-11-09 23:45:26.794459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.835 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.835 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:00.835 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:12:00.836 { 00:12:00.836 "name": "Malloc1", 00:12:00.836 "aliases": [ 00:12:00.836 "701f62ce-aedd-4dbd-a077-a03369751d4c" 00:12:00.836 ], 00:12:00.836 "product_name": "Malloc disk", 00:12:00.836 "block_size": 512, 00:12:00.836 "num_blocks": 1048576, 00:12:00.836 "uuid": "701f62ce-aedd-4dbd-a077-a03369751d4c", 00:12:00.836 "assigned_rate_limits": { 00:12:00.836 "rw_ios_per_sec": 0, 00:12:00.836 "rw_mbytes_per_sec": 0, 00:12:00.836 "r_mbytes_per_sec": 0, 00:12:00.836 "w_mbytes_per_sec": 0 00:12:00.836 }, 00:12:00.836 "claimed": true, 00:12:00.836 "claim_type": "exclusive_write", 00:12:00.836 "zoned": false, 00:12:00.836 "supported_io_types": { 00:12:00.836 "read": 
true, 00:12:00.836 "write": true, 00:12:00.836 "unmap": true, 00:12:00.836 "flush": true, 00:12:00.836 "reset": true, 00:12:00.836 "nvme_admin": false, 00:12:00.836 "nvme_io": false, 00:12:00.836 "nvme_io_md": false, 00:12:00.836 "write_zeroes": true, 00:12:00.836 "zcopy": true, 00:12:00.836 "get_zone_info": false, 00:12:00.836 "zone_management": false, 00:12:00.836 "zone_append": false, 00:12:00.836 "compare": false, 00:12:00.836 "compare_and_write": false, 00:12:00.836 "abort": true, 00:12:00.836 "seek_hole": false, 00:12:00.836 "seek_data": false, 00:12:00.836 "copy": true, 00:12:00.836 "nvme_iov_md": false 00:12:00.836 }, 00:12:00.836 "memory_domains": [ 00:12:00.836 { 00:12:00.836 "dma_device_id": "system", 00:12:00.836 "dma_device_type": 1 00:12:00.836 }, 00:12:00.836 { 00:12:00.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.836 "dma_device_type": 2 00:12:00.836 } 00:12:00.836 ], 00:12:00.836 "driver_specific": {} 00:12:00.836 } 00:12:00.836 ]' 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:00.836 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.402 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.402 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:12:01.402 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.402 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:01.402 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:03.929 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:04.495 23:45:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:05.427 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:05.427 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:05.427 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:05.427 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:05.428 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.428 ************************************ 00:12:05.428 START TEST filesystem_ext4 00:12:05.428 ************************************ 00:12:05.428 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
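Before the ext4 sub-test starts, the trace above has already provisioned the target and attached the host. Gathered into one sequence for readability (rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and the nvme0n1 name was resolved by matching the SPDKISFASTANDAWESOME serial in lsblk):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data for this variant
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1               # 512 MiB malloc bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%  # single partition for the filesystem tests
    partprobe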
00:12:05.428 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:05.428 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.428 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:05.428 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:12:05.428 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:05.428 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:12:05.428 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:12:05.428 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:12:05.428 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:12:05.428 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:05.428 mke2fs 1.47.0 (5-Feb-2023) 00:12:05.686 Discarding device blocks: 0/522240 done 00:12:05.686 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:05.686 Filesystem UUID: 97743f5e-3cd2-49e1-a683-2cc9c44867bb 00:12:05.686 Superblock backups stored on blocks: 00:12:05.686 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:05.686 00:12:05.686 Allocating group tables: 0/64 done 00:12:05.686 Writing inode tables: 0/64 done 00:12:06.619 Creating journal (8192 blocks): done 00:12:08.375 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:12:08.375 00:12:08.375 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:12:08.375 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:14.928 
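Each filesystem_* subtest repeats the same cycle visible above: format the partition, mount it, create and delete a small file, sync, and unmount. A condensed sketch of that loop, assuming /dev/nvme0n1p1 and /mnt/device as in the log; the force flag mirrors make_filesystem (-F for ext4, -f for btrfs and xfs):

    fstype=ext4                    # the suite also runs btrfs and xfs
    dev=/dev/nvme0n1p1
    mnt=/mnt/device

    case "$fstype" in
      ext4) force=-F ;;            # mke2fs overwrites an existing signature with -F
      *)    force=-f ;;            # mkfs.btrfs / mkfs.xfs use -f
    esac

    mkfs."$fstype" "$force" "$dev"
    mkdir -p "$mnt"
    mount "$dev" "$mnt"
    touch "$mnt/aaa"               # trivial write through the new filesystem
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"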
23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3395448 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:14.928 00:12:14.928 real 0m8.606s 00:12:14.928 user 0m0.021s 00:12:14.928 sys 0m0.066s 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:14.928 ************************************ 00:12:14.928 END TEST filesystem_ext4 00:12:14.928 ************************************ 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:14.928 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.929 ************************************ 00:12:14.929 START TEST filesystem_btrfs 00:12:14.929 ************************************ 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:12:14.929 23:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:14.929 btrfs-progs v6.8.1 00:12:14.929 See https://btrfs.readthedocs.io for more information. 00:12:14.929 00:12:14.929 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:14.929 NOTE: several default settings have changed in version 5.15, please make sure 00:12:14.929 this does not affect your deployments: 00:12:14.929 - DUP for metadata (-m dup) 00:12:14.929 - enabled no-holes (-O no-holes) 00:12:14.929 - enabled free-space-tree (-R free-space-tree) 00:12:14.929 00:12:14.929 Label: (null) 00:12:14.929 UUID: 005c94ad-4746-422d-aa68-ee169578a1c3 00:12:14.929 Node size: 16384 00:12:14.929 Sector size: 4096 (CPU page size: 4096) 00:12:14.929 Filesystem size: 510.00MiB 00:12:14.929 Block group profiles: 00:12:14.929 Data: single 8.00MiB 00:12:14.929 Metadata: DUP 32.00MiB 00:12:14.929 System: DUP 8.00MiB 00:12:14.929 SSD detected: yes 00:12:14.929 Zoned device: no 00:12:14.929 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:14.929 Checksum: crc32c 00:12:14.929 Number of devices: 1 00:12:14.929 Devices: 00:12:14.929 ID SIZE PATH 00:12:14.929 1 510.00MiB /dev/nvme0n1p1 00:12:14.929 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3395448 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:14.929 
23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:14.929 00:12:14.929 real 0m0.562s 00:12:14.929 user 0m0.011s 00:12:14.929 sys 0m0.110s 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:14.929 ************************************ 00:12:14.929 END TEST filesystem_btrfs 00:12:14.929 ************************************ 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.929 ************************************ 00:12:14.929 START TEST filesystem_xfs 00:12:14.929 ************************************ 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:12:14.929 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:14.929 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:14.929 = sectsz=512 attr=2, projid32bit=1 00:12:14.929 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:14.929 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:14.929 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:14.929 = sunit=0 swidth=0 blks 00:12:14.929 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:14.929 log =internal log bsize=4096 blocks=16384, version=2 00:12:14.929 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:14.929 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:15.495 Discarding blocks...Done. 00:12:15.495 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:12:15.495 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3395448 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:18.023 00:12:18.023 real 0m3.253s 00:12:18.023 user 0m0.014s 00:12:18.023 sys 0m0.060s 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:18.023 ************************************ 00:12:18.023 END TEST filesystem_xfs 00:12:18.023 ************************************ 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:18.023 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.281 23:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3395448 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3395448 ']' 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3395448 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3395448 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3395448' 00:12:18.281 killing process with pid 3395448 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3395448 00:12:18.281 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 3395448 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:20.804 00:12:20.804 real 0m21.662s 00:12:20.804 user 1m22.279s 00:12:20.804 sys 0m2.575s 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.804 ************************************ 00:12:20.804 END TEST nvmf_filesystem_no_in_capsule 00:12:20.804 ************************************ 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.804 ************************************ 00:12:20.804 START TEST nvmf_filesystem_in_capsule 00:12:20.804 ************************************ 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3398210 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3398210 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3398210 ']' 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
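nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then sits in waitforlisten until the RPC socket responds. A rough equivalent, with ./build/bin and ./scripts/rpc.py as placeholder paths rather than the job's real workspace layout:

    # Start the NVMe-oF target: instance id 0, all tracepoint groups, cores 0-3.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the default RPC socket until the application answers.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done

The only functional difference from the earlier no_in_capsule run is the transport setup that follows: nvmf_create_transport -t tcp -o -u 8192 -c 4096 lets up to 4096 bytes of write data travel inside the command capsule instead of being fetched separately.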
00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:20.804 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.804 [2024-11-09 23:45:46.933792] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:12:20.804 [2024-11-09 23:45:46.933950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.062 [2024-11-09 23:45:47.083324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.062 [2024-11-09 23:45:47.222166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.062 [2024-11-09 23:45:47.222246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.062 [2024-11-09 23:45:47.222272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.062 [2024-11-09 23:45:47.222296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.062 [2024-11-09 23:45:47.222317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.062 [2024-11-09 23:45:47.225354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.062 [2024-11-09 23:45:47.225436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.062 [2024-11-09 23:45:47.225516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.062 [2024-11-09 23:45:47.225523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.996 [2024-11-09 23:45:47.937973] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.996 23:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.996 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.566 Malloc1 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.566 [2024-11-09 23:45:48.537481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:12:22.566 23:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.566 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:12:22.566 { 00:12:22.566 "name": "Malloc1", 00:12:22.566 "aliases": [ 00:12:22.566 "f10d7083-ae13-433c-b4e7-1a6b5bb7db6c" 00:12:22.566 ], 00:12:22.566 "product_name": "Malloc disk", 00:12:22.566 "block_size": 512, 00:12:22.566 "num_blocks": 1048576, 00:12:22.566 "uuid": "f10d7083-ae13-433c-b4e7-1a6b5bb7db6c", 00:12:22.566 "assigned_rate_limits": { 00:12:22.566 "rw_ios_per_sec": 0, 00:12:22.566 "rw_mbytes_per_sec": 0, 00:12:22.566 "r_mbytes_per_sec": 0, 00:12:22.566 "w_mbytes_per_sec": 0 00:12:22.566 }, 00:12:22.566 "claimed": true, 00:12:22.566 "claim_type": "exclusive_write", 00:12:22.566 "zoned": false, 00:12:22.566 "supported_io_types": { 00:12:22.566 "read": true, 00:12:22.566 "write": true, 00:12:22.566 "unmap": true, 00:12:22.566 "flush": true, 00:12:22.566 "reset": true, 00:12:22.566 "nvme_admin": false, 00:12:22.566 "nvme_io": false, 00:12:22.566 "nvme_io_md": false, 00:12:22.566 "write_zeroes": true, 00:12:22.566 "zcopy": true, 00:12:22.567 "get_zone_info": false, 00:12:22.567 "zone_management": false, 00:12:22.567 "zone_append": false, 00:12:22.567 "compare": false, 00:12:22.567 "compare_and_write": false, 00:12:22.567 "abort": true, 00:12:22.567 "seek_hole": false, 00:12:22.567 "seek_data": false, 00:12:22.567 "copy": true, 00:12:22.567 "nvme_iov_md": false 00:12:22.567 }, 00:12:22.567 "memory_domains": [ 00:12:22.567 { 00:12:22.567 "dma_device_id": "system", 00:12:22.567 "dma_device_type": 1 00:12:22.567 }, 00:12:22.567 { 00:12:22.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.567 "dma_device_type": 2 00:12:22.567 } 00:12:22.567 ], 00:12:22.567 "driver_specific": {} 00:12:22.567 } 00:12:22.567 ]' 00:12:22.567 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:12:22.567 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:12:22.567 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:12:22.567 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:12:22.567 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:12:22.567 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:12:22.567 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:22.567 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.500 23:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.500 23:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:12:23.500 23:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.500 23:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:23.500 23:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:25.398 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:25.656 23:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:26.589 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.524 ************************************ 00:12:27.524 START TEST filesystem_in_capsule_ext4 00:12:27.524 ************************************ 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:12:27.524 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:27.524 mke2fs 1.47.0 (5-Feb-2023) 00:12:27.524 Discarding device blocks: 0/522240 done 00:12:27.524 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:27.524 Filesystem UUID: 18cb0c81-928c-47f4-8616-fab8b66ae84b 00:12:27.524 Superblock backups stored on blocks: 00:12:27.524 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:27.524 00:12:27.524 Allocating group tables: 0/64 done 00:12:27.524 Writing inode tables: 
0/64 done 00:12:27.782 Creating journal (8192 blocks): done 00:12:30.087 Writing superblocks and filesystem accounting information: 0/64 done 00:12:30.087 00:12:30.087 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:12:30.087 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:36.643 23:46:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3398210 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:36.643 00:12:36.643 real 0m8.518s 00:12:36.643 user 0m0.012s 00:12:36.643 sys 0m0.073s 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:36.643 ************************************ 00:12:36.643 END TEST filesystem_in_capsule_ext4 00:12:36.643 ************************************ 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.643 
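Before formatting, the script also cross-checks the size the initiator kernel reports against the Malloc bdev exported by the target (512-byte blocks, 1048576 blocks, so 536870912 bytes in the JSON above). A sketch of that bookkeeping; reading /sys/block/<dev>/size here is an assumption about what the sec_size_to_bytes helper does internally (sysfs reports 512-byte sectors):

    # Target-side view of the exported bdev.
    bs=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')
    nb=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')
    malloc_size=$((bs * nb))       # 512 * 1048576 = 536870912

    # Initiator-side view of the attached namespace.
    nvme_size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))

    (( nvme_size == malloc_size )) || echo "size mismatch: $nvme_size != $malloc_size" >&2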
************************************ 00:12:36.643 START TEST filesystem_in_capsule_btrfs 00:12:36.643 ************************************ 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:36.643 btrfs-progs v6.8.1 00:12:36.643 See https://btrfs.readthedocs.io for more information. 00:12:36.643 00:12:36.643 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:36.643 NOTE: several default settings have changed in version 5.15, please make sure 00:12:36.643 this does not affect your deployments: 00:12:36.643 - DUP for metadata (-m dup) 00:12:36.643 - enabled no-holes (-O no-holes) 00:12:36.643 - enabled free-space-tree (-R free-space-tree) 00:12:36.643 00:12:36.643 Label: (null) 00:12:36.643 UUID: a2ea31f2-9948-444a-9435-a256874b968b 00:12:36.643 Node size: 16384 00:12:36.643 Sector size: 4096 (CPU page size: 4096) 00:12:36.643 Filesystem size: 510.00MiB 00:12:36.643 Block group profiles: 00:12:36.643 Data: single 8.00MiB 00:12:36.643 Metadata: DUP 32.00MiB 00:12:36.643 System: DUP 8.00MiB 00:12:36.643 SSD detected: yes 00:12:36.643 Zoned device: no 00:12:36.643 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:36.643 Checksum: crc32c 00:12:36.643 Number of devices: 1 00:12:36.643 Devices: 00:12:36.643 ID SIZE PATH 00:12:36.643 1 510.00MiB /dev/nvme0n1p1 00:12:36.643 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3398210 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:36.643 00:12:36.643 real 0m0.555s 00:12:36.643 user 0m0.029s 00:12:36.643 sys 0m0.089s 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:36.643 ************************************ 00:12:36.643 END TEST filesystem_in_capsule_btrfs 00:12:36.643 ************************************ 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:36.643 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:36.644 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.644 ************************************ 00:12:36.644 START TEST filesystem_in_capsule_xfs 00:12:36.644 ************************************ 00:12:36.644 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:12:36.644 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:36.644 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:36.644 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:36.644 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:12:36.644 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:36.644 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:12:36.644 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:12:36.644 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:12:36.644 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:12:36.644 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:36.644 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:36.644 = sectsz=512 attr=2, projid32bit=1 00:12:36.644 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:36.644 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:36.644 data = bsize=4096 blocks=130560, imaxpct=25 00:12:36.644 = sunit=0 swidth=0 blks 00:12:36.644 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:36.644 log =internal log bsize=4096 blocks=16384, version=2 00:12:36.644 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:36.644 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:37.578 Discarding blocks...Done. 
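After each mkfs/mount/umount cycle the script also confirms that the target process survived the I/O and that the kernel still exposes both the namespace and its partition: the kill -0 and lsblk greps that recur throughout the log. Stripped down, assuming the target PID is in $nvmfpid:

    kill -0 "$nvmfpid"                         # no signal is sent; this only checks the PID exists
    lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible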
00:12:37.578 23:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:12:37.578 23:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3398210 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:39.477 00:12:39.477 real 0m2.717s 00:12:39.477 user 0m0.020s 00:12:39.477 sys 0m0.056s 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:39.477 ************************************ 00:12:39.477 END TEST filesystem_in_capsule_xfs 00:12:39.477 ************************************ 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3398210 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3398210 ']' 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3398210 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:39.477 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3398210 00:12:39.735 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:39.735 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:39.735 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3398210' 00:12:39.735 killing process with pid 3398210 00:12:39.735 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3398210 00:12:39.735 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3398210 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:42.265 00:12:42.265 real 0m21.257s 00:12:42.265 user 1m20.654s 00:12:42.265 sys 0m2.589s 00:12:42.265 23:46:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:42.265 ************************************ 00:12:42.265 END TEST nvmf_filesystem_in_capsule 00:12:42.265 ************************************ 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.265 rmmod nvme_tcp 00:12:42.265 rmmod nvme_fabrics 00:12:42.265 rmmod nvme_keyring 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.265 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.266 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.266 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:42.266 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:42.266 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.266 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.266 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.266 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:42.266 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.266 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.266 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.251 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:44.251 00:12:44.251 real 0m47.730s 00:12:44.251 user 2m44.072s 00:12:44.251 sys 0m6.846s 00:12:44.251 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:44.251 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:44.251 
************************************ 00:12:44.251 END TEST nvmf_filesystem 00:12:44.251 ************************************ 00:12:44.251 23:46:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:44.251 23:46:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:44.251 23:46:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:44.251 23:46:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:44.251 ************************************ 00:12:44.251 START TEST nvmf_target_discovery 00:12:44.251 ************************************ 00:12:44.251 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:44.251 * Looking for test storage... 00:12:44.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:44.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.252 --rc genhtml_branch_coverage=1 00:12:44.252 --rc genhtml_function_coverage=1 00:12:44.252 --rc genhtml_legend=1 00:12:44.252 --rc geninfo_all_blocks=1 00:12:44.252 --rc geninfo_unexecuted_blocks=1 00:12:44.252 00:12:44.252 ' 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:44.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.252 --rc genhtml_branch_coverage=1 00:12:44.252 --rc genhtml_function_coverage=1 00:12:44.252 --rc genhtml_legend=1 00:12:44.252 --rc geninfo_all_blocks=1 00:12:44.252 --rc geninfo_unexecuted_blocks=1 00:12:44.252 00:12:44.252 ' 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:44.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.252 --rc genhtml_branch_coverage=1 00:12:44.252 --rc genhtml_function_coverage=1 00:12:44.252 --rc genhtml_legend=1 00:12:44.252 --rc geninfo_all_blocks=1 00:12:44.252 --rc geninfo_unexecuted_blocks=1 00:12:44.252 00:12:44.252 ' 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:44.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.252 --rc genhtml_branch_coverage=1 00:12:44.252 --rc genhtml_function_coverage=1 00:12:44.252 --rc genhtml_legend=1 00:12:44.252 --rc geninfo_all_blocks=1 00:12:44.252 --rc geninfo_unexecuted_blocks=1 00:12:44.252 00:12:44.252 ' 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.252 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:44.253 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:46.788 23:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.788 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:46.789 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:46.789 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:46.789 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:46.789 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.789 23:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:46.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:12:46.789 00:12:46.789 --- 10.0.0.2 ping statistics --- 00:12:46.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.789 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:12:46.789 00:12:46.789 --- 10.0.0.1 ping statistics --- 00:12:46.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.789 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3402777 00:12:46.789 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3402777 00:12:46.790 23:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.790 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3402777 ']' 00:12:46.790 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.790 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:46.790 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.790 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:46.790 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.790 [2024-11-09 23:46:12.740631] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:12:46.790 [2024-11-09 23:46:12.740773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.790 [2024-11-09 23:46:12.893657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.048 [2024-11-09 23:46:13.035633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.048 [2024-11-09 23:46:13.035701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.048 [2024-11-09 23:46:13.035727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.048 [2024-11-09 23:46:13.035751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.048 [2024-11-09 23:46:13.035771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:47.048 [2024-11-09 23:46:13.038617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.048 [2024-11-09 23:46:13.038672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.048 [2024-11-09 23:46:13.038767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.048 [2024-11-09 23:46:13.038771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.614 [2024-11-09 23:46:13.789145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.614 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.872 Null1 00:12:47.872 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.872 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:47.872 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.872 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.872 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.872 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:47.872 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 23:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 [2024-11-09 23:46:13.837566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 Null2 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:47.873 Null3 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 Null4 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.873 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:48.131 00:12:48.131 Discovery Log Number of Records 6, Generation counter 6 00:12:48.132 =====Discovery Log Entry 0====== 00:12:48.132 trtype: tcp 00:12:48.132 adrfam: ipv4 00:12:48.132 subtype: current discovery subsystem 00:12:48.132 treq: not required 00:12:48.132 portid: 0 00:12:48.132 trsvcid: 4420 00:12:48.132 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:48.132 traddr: 10.0.0.2 00:12:48.132 eflags: explicit discovery connections, duplicate discovery information 00:12:48.132 sectype: none 00:12:48.132 =====Discovery Log Entry 1====== 00:12:48.132 trtype: tcp 00:12:48.132 adrfam: ipv4 00:12:48.132 subtype: nvme subsystem 00:12:48.132 treq: not required 00:12:48.132 portid: 0 00:12:48.132 trsvcid: 4420 00:12:48.132 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:48.132 traddr: 10.0.0.2 00:12:48.132 eflags: none 00:12:48.132 sectype: none 00:12:48.132 =====Discovery Log Entry 2====== 00:12:48.132 trtype: tcp 00:12:48.132 adrfam: ipv4 00:12:48.132 subtype: nvme subsystem 00:12:48.132 treq: not required 00:12:48.132 portid: 0 00:12:48.132 trsvcid: 4420 00:12:48.132 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:48.132 traddr: 10.0.0.2 00:12:48.132 eflags: none 00:12:48.132 sectype: none 00:12:48.132 =====Discovery Log Entry 3====== 00:12:48.132 trtype: tcp 00:12:48.132 adrfam: ipv4 00:12:48.132 subtype: nvme subsystem 00:12:48.132 treq: not required 00:12:48.132 portid: 0 00:12:48.132 trsvcid: 4420 00:12:48.132 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:48.132 traddr: 10.0.0.2 00:12:48.132 eflags: none 00:12:48.132 sectype: none 00:12:48.132 =====Discovery Log Entry 4====== 00:12:48.132 trtype: tcp 00:12:48.132 adrfam: ipv4 00:12:48.132 subtype: nvme subsystem 
00:12:48.132 treq: not required 00:12:48.132 portid: 0 00:12:48.132 trsvcid: 4420 00:12:48.132 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:48.132 traddr: 10.0.0.2 00:12:48.132 eflags: none 00:12:48.132 sectype: none 00:12:48.132 =====Discovery Log Entry 5====== 00:12:48.132 trtype: tcp 00:12:48.132 adrfam: ipv4 00:12:48.132 subtype: discovery subsystem referral 00:12:48.132 treq: not required 00:12:48.132 portid: 0 00:12:48.132 trsvcid: 4430 00:12:48.132 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:48.132 traddr: 10.0.0.2 00:12:48.132 eflags: none 00:12:48.132 sectype: none 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:48.132 Perform nvmf subsystem discovery via RPC 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.132 [ 00:12:48.132 { 00:12:48.132 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:48.132 "subtype": "Discovery", 00:12:48.132 "listen_addresses": [ 00:12:48.132 { 00:12:48.132 "trtype": "TCP", 00:12:48.132 "adrfam": "IPv4", 00:12:48.132 "traddr": "10.0.0.2", 00:12:48.132 "trsvcid": "4420" 00:12:48.132 } 00:12:48.132 ], 00:12:48.132 "allow_any_host": true, 00:12:48.132 "hosts": [] 00:12:48.132 }, 00:12:48.132 { 00:12:48.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:48.132 "subtype": "NVMe", 00:12:48.132 "listen_addresses": [ 00:12:48.132 { 00:12:48.132 "trtype": "TCP", 00:12:48.132 "adrfam": "IPv4", 00:12:48.132 "traddr": "10.0.0.2", 00:12:48.132 "trsvcid": "4420" 00:12:48.132 } 00:12:48.132 ], 00:12:48.132 "allow_any_host": true, 00:12:48.132 "hosts": [], 00:12:48.132 "serial_number": "SPDK00000000000001", 00:12:48.132 "model_number": "SPDK bdev Controller", 00:12:48.132 "max_namespaces": 32, 00:12:48.132 "min_cntlid": 1, 00:12:48.132 "max_cntlid": 65519, 00:12:48.132 "namespaces": [ 00:12:48.132 { 00:12:48.132 "nsid": 1, 00:12:48.132 "bdev_name": "Null1", 00:12:48.132 "name": "Null1", 00:12:48.132 "nguid": "E89AEB5E49254462AB860B23928F427B", 00:12:48.132 "uuid": "e89aeb5e-4925-4462-ab86-0b23928f427b" 00:12:48.132 } 00:12:48.132 ] 00:12:48.132 }, 00:12:48.132 { 00:12:48.132 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:48.132 "subtype": "NVMe", 00:12:48.132 "listen_addresses": [ 00:12:48.132 { 00:12:48.132 "trtype": "TCP", 00:12:48.132 "adrfam": "IPv4", 00:12:48.132 "traddr": "10.0.0.2", 00:12:48.132 "trsvcid": "4420" 00:12:48.132 } 00:12:48.132 ], 00:12:48.132 "allow_any_host": true, 00:12:48.132 "hosts": [], 00:12:48.132 "serial_number": "SPDK00000000000002", 00:12:48.132 "model_number": "SPDK bdev Controller", 00:12:48.132 "max_namespaces": 32, 00:12:48.132 "min_cntlid": 1, 00:12:48.132 "max_cntlid": 65519, 00:12:48.132 "namespaces": [ 00:12:48.132 { 00:12:48.132 "nsid": 1, 00:12:48.132 "bdev_name": "Null2", 00:12:48.132 "name": "Null2", 00:12:48.132 "nguid": "05662DBC01A54AD586EBC79416D26438", 00:12:48.132 "uuid": "05662dbc-01a5-4ad5-86eb-c79416d26438" 00:12:48.132 } 00:12:48.132 ] 00:12:48.132 }, 00:12:48.132 { 00:12:48.132 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:48.132 "subtype": "NVMe", 00:12:48.132 "listen_addresses": [ 00:12:48.132 { 00:12:48.132 "trtype": "TCP", 00:12:48.132 "adrfam": "IPv4", 00:12:48.132 "traddr": "10.0.0.2", 
00:12:48.132 "trsvcid": "4420" 00:12:48.132 } 00:12:48.132 ], 00:12:48.132 "allow_any_host": true, 00:12:48.132 "hosts": [], 00:12:48.132 "serial_number": "SPDK00000000000003", 00:12:48.132 "model_number": "SPDK bdev Controller", 00:12:48.132 "max_namespaces": 32, 00:12:48.132 "min_cntlid": 1, 00:12:48.132 "max_cntlid": 65519, 00:12:48.132 "namespaces": [ 00:12:48.132 { 00:12:48.132 "nsid": 1, 00:12:48.132 "bdev_name": "Null3", 00:12:48.132 "name": "Null3", 00:12:48.132 "nguid": "C6D14008B18049F680C4137076252D75", 00:12:48.132 "uuid": "c6d14008-b180-49f6-80c4-137076252d75" 00:12:48.132 } 00:12:48.132 ] 00:12:48.132 }, 00:12:48.132 { 00:12:48.132 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:48.132 "subtype": "NVMe", 00:12:48.132 "listen_addresses": [ 00:12:48.132 { 00:12:48.132 "trtype": "TCP", 00:12:48.132 "adrfam": "IPv4", 00:12:48.132 "traddr": "10.0.0.2", 00:12:48.132 "trsvcid": "4420" 00:12:48.132 } 00:12:48.132 ], 00:12:48.132 "allow_any_host": true, 00:12:48.132 "hosts": [], 00:12:48.132 "serial_number": "SPDK00000000000004", 00:12:48.132 "model_number": "SPDK bdev Controller", 00:12:48.132 "max_namespaces": 32, 00:12:48.132 "min_cntlid": 1, 00:12:48.132 "max_cntlid": 65519, 00:12:48.132 "namespaces": [ 00:12:48.132 { 00:12:48.132 "nsid": 1, 00:12:48.132 "bdev_name": "Null4", 00:12:48.132 "name": "Null4", 00:12:48.132 "nguid": "0800033FA2824DBE80D3C2783496CB22", 00:12:48.132 "uuid": "0800033f-a282-4dbe-80d3-c2783496cb22" 00:12:48.132 } 00:12:48.132 ] 00:12:48.132 } 00:12:48.132 ] 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:48.132 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.133 23:46:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:48.133 23:46:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.133 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.133 rmmod nvme_tcp 00:12:48.133 rmmod nvme_fabrics 00:12:48.133 rmmod nvme_keyring 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3402777 ']' 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3402777 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3402777 ']' 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3402777 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3402777 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3402777' 00:12:48.391 killing process with pid 3402777 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3402777 00:12:48.391 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3402777 00:12:49.326 23:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:49.326 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:49.326 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:49.326 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:49.326 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:49.326 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:49.326 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:49.326 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.326 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:49.326 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.326 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.326 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:51.862 00:12:51.862 real 0m7.264s 00:12:51.862 user 0m9.885s 00:12:51.862 sys 0m2.065s 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:51.862 ************************************ 00:12:51.862 END TEST nvmf_target_discovery 00:12:51.862 ************************************ 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.862 ************************************ 00:12:51.862 START TEST nvmf_referrals 00:12:51.862 ************************************ 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:51.862 * Looking for test storage... 
00:12:51.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.862 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:51.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.862 --rc genhtml_branch_coverage=1 00:12:51.862 --rc genhtml_function_coverage=1 00:12:51.862 --rc genhtml_legend=1 00:12:51.863 --rc geninfo_all_blocks=1 00:12:51.863 --rc geninfo_unexecuted_blocks=1 00:12:51.863 00:12:51.863 ' 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:51.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.863 --rc genhtml_branch_coverage=1 00:12:51.863 --rc genhtml_function_coverage=1 00:12:51.863 --rc genhtml_legend=1 00:12:51.863 --rc geninfo_all_blocks=1 00:12:51.863 --rc geninfo_unexecuted_blocks=1 00:12:51.863 00:12:51.863 ' 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:51.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.863 --rc genhtml_branch_coverage=1 00:12:51.863 --rc genhtml_function_coverage=1 00:12:51.863 --rc genhtml_legend=1 00:12:51.863 --rc geninfo_all_blocks=1 00:12:51.863 --rc geninfo_unexecuted_blocks=1 00:12:51.863 00:12:51.863 ' 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:51.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.863 --rc genhtml_branch_coverage=1 00:12:51.863 --rc genhtml_function_coverage=1 00:12:51.863 --rc genhtml_legend=1 00:12:51.863 --rc geninfo_all_blocks=1 00:12:51.863 --rc geninfo_unexecuted_blocks=1 00:12:51.863 00:12:51.863 ' 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:51.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:51.863 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:53.766 23:46:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:53.766 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:53.766 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:53.766 
23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:53.766 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:53.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:53.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:53.767 23:46:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:53.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:53.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:12:53.767 00:12:53.767 --- 10.0.0.2 ping statistics --- 00:12:53.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.767 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:53.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:12:53.767 00:12:53.767 --- 10.0.0.1 ping statistics --- 00:12:53.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.767 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:53.767 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3405134 00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3405134 00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3405134 ']' 00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:54.025 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.025 [2024-11-09 23:46:20.083212] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:12:54.025 [2024-11-09 23:46:20.083346] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.284 [2024-11-09 23:46:20.236863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.284 [2024-11-09 23:46:20.381536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.284 [2024-11-09 23:46:20.381619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.284 [2024-11-09 23:46:20.381647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.284 [2024-11-09 23:46:20.381672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.284 [2024-11-09 23:46:20.381693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.284 [2024-11-09 23:46:20.384698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.284 [2024-11-09 23:46:20.384770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.284 [2024-11-09 23:46:20.384863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.284 [2024-11-09 23:46:20.384867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.217 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:55.217 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:12:55.217 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.218 [2024-11-09 23:46:21.113816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:55.218 [2024-11-09 23:46:21.140557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:55.218 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:55.476 23:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:55.476 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:55.734 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:55.735 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:55.735 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:55.735 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:55.735 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:55.735 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:55.735 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:55.735 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:55.735 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:55.735 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:55.735 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:55.735 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:55.993 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:55.993 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:55.993 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:55.993 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:55.993 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:55.993 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.250 23:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:56.250 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:56.251 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:56.251 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:56.251 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:56.251 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:56.509 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:56.767 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.026 rmmod nvme_tcp 00:12:57.026 rmmod nvme_fabrics 00:12:57.026 rmmod nvme_keyring 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3405134 ']' 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3405134 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3405134 ']' 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3405134 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:57.026 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3405134 00:12:57.284 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:57.284 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:57.284 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3405134' 00:12:57.284 killing process with pid 3405134 00:12:57.284 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 3405134 00:12:57.284 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3405134 00:12:58.220 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:58.220 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:58.220 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:58.220 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:58.220 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:58.220 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:58.220 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:58.220 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:58.220 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:58.220 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.220 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.220 23:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:00.760 00:13:00.760 real 0m8.739s 00:13:00.760 user 0m16.347s 00:13:00.760 sys 0m2.539s 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.760 ************************************ 00:13:00.760 END TEST nvmf_referrals 00:13:00.760 ************************************ 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:00.760 ************************************ 00:13:00.760 START TEST nvmf_connect_disconnect 00:13:00.760 ************************************ 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:00.760 * Looking for test storage... 00:13:00.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.760 23:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:00.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.760 --rc genhtml_branch_coverage=1 00:13:00.760 --rc genhtml_function_coverage=1 00:13:00.760 --rc genhtml_legend=1 00:13:00.760 --rc geninfo_all_blocks=1 00:13:00.760 --rc geninfo_unexecuted_blocks=1 00:13:00.760 00:13:00.760 ' 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:00.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.760 --rc genhtml_branch_coverage=1 00:13:00.760 --rc genhtml_function_coverage=1 00:13:00.760 --rc genhtml_legend=1 00:13:00.760 --rc geninfo_all_blocks=1 00:13:00.760 --rc geninfo_unexecuted_blocks=1 00:13:00.760 00:13:00.760 ' 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:00.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.760 --rc genhtml_branch_coverage=1 00:13:00.760 --rc genhtml_function_coverage=1 00:13:00.760 --rc genhtml_legend=1 00:13:00.760 --rc geninfo_all_blocks=1 00:13:00.760 --rc geninfo_unexecuted_blocks=1 00:13:00.760 00:13:00.760 ' 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:00.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.760 --rc genhtml_branch_coverage=1 00:13:00.760 --rc genhtml_function_coverage=1 00:13:00.760 --rc genhtml_legend=1 00:13:00.760 --rc geninfo_all_blocks=1 00:13:00.760 --rc geninfo_unexecuted_blocks=1 00:13:00.760 00:13:00.760 ' 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.760 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.761 23:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:00.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.761 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:02.663 
23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.663 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:02.664 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.664 
23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:02.664 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:02.664 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:02.664 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:02.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:13:02.664 00:13:02.664 --- 10.0.0.2 ping statistics --- 00:13:02.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.664 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:13:02.664 00:13:02.664 --- 10.0.0.1 ping statistics --- 00:13:02.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.664 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3407690 00:13:02.664 23:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3407690 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3407690 ']' 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:02.664 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:02.923 [2024-11-09 23:46:28.945048] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:13:02.923 [2024-11-09 23:46:28.945214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.923 [2024-11-09 23:46:29.107810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.181 [2024-11-09 23:46:29.251620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.181 [2024-11-09 23:46:29.251700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.181 [2024-11-09 23:46:29.251726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.181 [2024-11-09 23:46:29.251750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.181 [2024-11-09 23:46:29.251770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:03.181 [2024-11-09 23:46:29.254640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.181 [2024-11-09 23:46:29.254701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.181 [2024-11-09 23:46:29.254754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.181 [2024-11-09 23:46:29.254761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.746 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:03.746 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:13:03.746 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.746 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.746 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.004 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.004 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:04.004 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.004 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.004 [2024-11-09 23:46:29.955383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.004 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.004 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:04.004 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.004 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.004 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.004 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:04.004 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:04.004 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.004 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.004 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.004 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:04.004 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.004 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.004 23:46:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.004 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.005 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.005 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.005 [2024-11-09 23:46:30.081375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.005 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.005 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:04.005 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:04.005 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:04.005 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:06.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.809 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:16:12.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:58.906 rmmod nvme_tcp 00:16:58.906 rmmod nvme_fabrics 00:16:58.906 rmmod nvme_keyring 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3407690 ']' 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3407690 00:16:58.906 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3407690 ']' 00:16:58.907 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3407690 00:16:58.907 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
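
The trace above is the whole nvmf_connect_disconnect body: a TCP listener for nqn.2016-06.io.spdk:cnode1 is added on 10.0.0.2:4420, num_iterations is set to 100, NVME_CONNECT is overridden to 'nvme connect -i 8', and every iteration ends with one "disconnected 1 controller(s)" line. A minimal sketch of that loop, reconstructed only from what the trace shows (the exact connect flags and any readiness checks in connect_disconnect.sh are assumptions, not the script's code):

    # illustrative sketch of the loop recorded above, not the connect_disconnect.sh source
    NQN=nqn.2016-06.io.spdk:cnode1
    NVME_CONNECT="nvme connect -i 8"          # -i 8 = eight I/O queues, as set in the trace
    num_iterations=100
    for ((i = 1; i <= num_iterations; i++)); do
        $NVME_CONNECT -t tcp -n "$NQN" -a 10.0.0.2 -s 4420    # assumed transport/address flags
        nvme disconnect -n "$NQN"             # prints "NQN:<nqn> disconnected 1 controller(s)"
    done
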
00:16:58.907 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:58.907 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3407690 00:16:58.907 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:58.907 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:58.907 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3407690' 00:16:58.907 killing process with pid 3407690 00:16:58.907 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3407690 00:16:58.907 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3407690 00:17:00.281 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:00.281 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:00.281 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:00.281 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:00.281 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:00.281 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:00.281 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:00.281 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:00.281 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:00.282 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.282 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.282 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.186 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:02.186 00:17:02.186 real 4m1.926s 00:17:02.186 user 15m14.781s 00:17:02.186 sys 0m39.119s 00:17:02.186 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:02.186 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:02.186 ************************************ 00:17:02.186 END TEST nvmf_connect_disconnect 00:17:02.186 ************************************ 00:17:02.186 23:50:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:02.186 23:50:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:02.186 23:50:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:02.186 23:50:28 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:02.186 ************************************ 00:17:02.186 START TEST nvmf_multitarget 00:17:02.186 ************************************ 00:17:02.186 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:02.445 * Looking for test storage... 00:17:02.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:02.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.445 --rc genhtml_branch_coverage=1 00:17:02.445 --rc genhtml_function_coverage=1 00:17:02.445 --rc genhtml_legend=1 00:17:02.445 --rc geninfo_all_blocks=1 00:17:02.445 --rc geninfo_unexecuted_blocks=1 00:17:02.445 00:17:02.445 ' 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:02.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.445 --rc genhtml_branch_coverage=1 00:17:02.445 --rc genhtml_function_coverage=1 00:17:02.445 --rc genhtml_legend=1 00:17:02.445 --rc geninfo_all_blocks=1 00:17:02.445 --rc geninfo_unexecuted_blocks=1 00:17:02.445 00:17:02.445 ' 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:02.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.445 --rc genhtml_branch_coverage=1 00:17:02.445 --rc genhtml_function_coverage=1 00:17:02.445 --rc genhtml_legend=1 00:17:02.445 --rc geninfo_all_blocks=1 00:17:02.445 --rc geninfo_unexecuted_blocks=1 00:17:02.445 00:17:02.445 ' 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:02.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.445 --rc genhtml_branch_coverage=1 00:17:02.445 --rc genhtml_function_coverage=1 00:17:02.445 --rc genhtml_legend=1 00:17:02.445 --rc geninfo_all_blocks=1 00:17:02.445 --rc geninfo_unexecuted_blocks=1 00:17:02.445 00:17:02.445 ' 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.445 23:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:02.445 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:02.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:02.446 23:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:02.446 23:50:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:04.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:04.978 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:04.979 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:04.979 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:04.979 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:04.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:17:04.979 00:17:04.979 --- 10.0.0.2 ping statistics --- 00:17:04.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.979 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:04.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:17:04.979 00:17:04.979 --- 10.0.0.1 ping statistics --- 00:17:04.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.979 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3439318 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3439318 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3439318 ']' 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:04.979 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:04.979 [2024-11-09 23:50:30.986534] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
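
Earlier in this block, nvmftestinit (the nvmf_tcp_init path of nvmf/common.sh) wires the two e810 ports into a point-to-point test link before nvmf_tgt is started inside the target namespace. Condensed from the ip/iptables/ping commands visible in the trace (a summary of what was executed, not the common.sh source):

    # target port cvl_0_0 is isolated in its own namespace; initiator port cvl_0_1 stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP reach port 4420
    ping -c 1 10.0.0.2                                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns
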
00:17:04.979 [2024-11-09 23:50:30.986721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.979 [2024-11-09 23:50:31.129815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.238 [2024-11-09 23:50:31.265357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.238 [2024-11-09 23:50:31.265431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.238 [2024-11-09 23:50:31.265457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.238 [2024-11-09 23:50:31.265480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.238 [2024-11-09 23:50:31.265500] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.238 [2024-11-09 23:50:31.268254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.238 [2024-11-09 23:50:31.268324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.238 [2024-11-09 23:50:31.268409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.238 [2024-11-09 23:50:31.268431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.803 23:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:05.803 23:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:17:05.803 23:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:05.803 23:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:05.803 23:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:05.803 23:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.803 23:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:05.803 23:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:05.803 23:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:06.061 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:06.061 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:06.061 "nvmf_tgt_1" 00:17:06.061 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:06.319 "nvmf_tgt_2" 00:17:06.319 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:17:06.319 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:06.319 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:06.319 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:06.577 true 00:17:06.577 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:06.577 true 00:17:06.577 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:06.577 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:06.577 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:06.835 rmmod nvme_tcp 00:17:06.835 rmmod nvme_fabrics 00:17:06.835 rmmod nvme_keyring 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3439318 ']' 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3439318 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3439318 ']' 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3439318 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3439318 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:06.835 23:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3439318' 00:17:06.835 killing process with pid 3439318 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3439318 00:17:06.835 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3439318 00:17:08.210 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:08.210 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:08.210 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:08.210 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:08.210 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:08.210 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:08.210 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:08.210 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:08.210 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:08.210 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.210 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.210 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:10.116 00:17:10.116 real 0m7.653s 00:17:10.116 user 0m12.163s 00:17:10.116 sys 0m2.167s 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:10.116 ************************************ 00:17:10.116 END TEST nvmf_multitarget 00:17:10.116 ************************************ 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.116 ************************************ 00:17:10.116 START TEST nvmf_rpc 00:17:10.116 ************************************ 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:10.116 * Looking for test storage... 
00:17:10.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.116 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:10.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.117 --rc genhtml_branch_coverage=1 00:17:10.117 --rc genhtml_function_coverage=1 00:17:10.117 --rc genhtml_legend=1 00:17:10.117 --rc geninfo_all_blocks=1 00:17:10.117 --rc geninfo_unexecuted_blocks=1 00:17:10.117 00:17:10.117 ' 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:10.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.117 --rc genhtml_branch_coverage=1 00:17:10.117 --rc genhtml_function_coverage=1 00:17:10.117 --rc genhtml_legend=1 00:17:10.117 --rc geninfo_all_blocks=1 00:17:10.117 --rc geninfo_unexecuted_blocks=1 00:17:10.117 00:17:10.117 ' 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:10.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.117 --rc genhtml_branch_coverage=1 00:17:10.117 --rc genhtml_function_coverage=1 00:17:10.117 --rc genhtml_legend=1 00:17:10.117 --rc geninfo_all_blocks=1 00:17:10.117 --rc geninfo_unexecuted_blocks=1 00:17:10.117 00:17:10.117 ' 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:10.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.117 --rc genhtml_branch_coverage=1 00:17:10.117 --rc genhtml_function_coverage=1 00:17:10.117 --rc genhtml_legend=1 00:17:10.117 --rc geninfo_all_blocks=1 00:17:10.117 --rc geninfo_unexecuted_blocks=1 00:17:10.117 00:17:10.117 ' 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.117 23:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.117 23:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:12.651 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:12.651 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:12.651 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:12.651 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.651 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:12.652 23:50:38 
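The scan above found the two e810 functions (device ID 0x159b, ice driver) at 0000:0a:00.0 and 0000:0a:00.1 and mapped them to the cvl_0_0/cvl_0_1 net devices through sysfs, picking cvl_0_0 as the target interface (10.0.0.2) and cvl_0_1 as the initiator interface (10.0.0.1). A minimal way to reproduce that lookup by hand (illustrative only; the PCI addresses are the ones detected on this node):

# list the kernel net devices backing each e810 function, as the script does via sysfs
for pci in 0000:0a:00.0 0000:0a:00.1; do
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
done
# on this node: 0000:0a:00.0 -> cvl_0_0, 0000:0a:00.1 -> cvl_0_1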
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:12.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:17:12.652 00:17:12.652 --- 10.0.0.2 ping statistics --- 00:17:12.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.652 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:12.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:12.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:17:12.652 00:17:12.652 --- 10.0.0.1 ping statistics --- 00:17:12.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.652 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3441619 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3441619 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3441619 ']' 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:12.652 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.652 [2024-11-09 23:50:38.531270] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
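Pulled together, the nvmf_tcp_init and nvmfappstart steps traced above build a two-endpoint topology on one box: the target port cvl_0_0 is moved into a private namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, TCP port 4420 is opened in the firewall, connectivity is ping-checked both ways, and nvmf_tgt is started inside the namespace. A condensed sketch of the same sequence (binary path shortened; interface names, addresses, and the -m 0xF core mask are the ones from this run):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator NIC stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 4 reactors (mask 0xF)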
00:17:12.652 [2024-11-09 23:50:38.531410] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.652 [2024-11-09 23:50:38.681705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.652 [2024-11-09 23:50:38.826247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.652 [2024-11-09 23:50:38.826323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.652 [2024-11-09 23:50:38.826348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.652 [2024-11-09 23:50:38.826371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.652 [2024-11-09 23:50:38.826390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.652 [2024-11-09 23:50:38.829139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.652 [2024-11-09 23:50:38.829208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.652 [2024-11-09 23:50:38.829258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.652 [2024-11-09 23:50:38.829264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:13.587 "tick_rate": 2700000000, 00:17:13.587 "poll_groups": [ 00:17:13.587 { 00:17:13.587 "name": "nvmf_tgt_poll_group_000", 00:17:13.587 "admin_qpairs": 0, 00:17:13.587 "io_qpairs": 0, 00:17:13.587 "current_admin_qpairs": 0, 00:17:13.587 "current_io_qpairs": 0, 00:17:13.587 "pending_bdev_io": 0, 00:17:13.587 "completed_nvme_io": 0, 00:17:13.587 "transports": [] 00:17:13.587 }, 00:17:13.587 { 00:17:13.587 "name": "nvmf_tgt_poll_group_001", 00:17:13.587 "admin_qpairs": 0, 00:17:13.587 "io_qpairs": 0, 00:17:13.587 "current_admin_qpairs": 0, 00:17:13.587 "current_io_qpairs": 0, 00:17:13.587 "pending_bdev_io": 0, 00:17:13.587 "completed_nvme_io": 0, 00:17:13.587 "transports": [] 00:17:13.587 }, 00:17:13.587 { 00:17:13.587 "name": "nvmf_tgt_poll_group_002", 00:17:13.587 "admin_qpairs": 0, 00:17:13.587 "io_qpairs": 0, 00:17:13.587 
"current_admin_qpairs": 0, 00:17:13.587 "current_io_qpairs": 0, 00:17:13.587 "pending_bdev_io": 0, 00:17:13.587 "completed_nvme_io": 0, 00:17:13.587 "transports": [] 00:17:13.587 }, 00:17:13.587 { 00:17:13.587 "name": "nvmf_tgt_poll_group_003", 00:17:13.587 "admin_qpairs": 0, 00:17:13.587 "io_qpairs": 0, 00:17:13.587 "current_admin_qpairs": 0, 00:17:13.587 "current_io_qpairs": 0, 00:17:13.587 "pending_bdev_io": 0, 00:17:13.587 "completed_nvme_io": 0, 00:17:13.587 "transports": [] 00:17:13.587 } 00:17:13.587 ] 00:17:13.587 }' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.587 [2024-11-09 23:50:39.600318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:13.587 "tick_rate": 2700000000, 00:17:13.587 "poll_groups": [ 00:17:13.587 { 00:17:13.587 "name": "nvmf_tgt_poll_group_000", 00:17:13.587 "admin_qpairs": 0, 00:17:13.587 "io_qpairs": 0, 00:17:13.587 "current_admin_qpairs": 0, 00:17:13.587 "current_io_qpairs": 0, 00:17:13.587 "pending_bdev_io": 0, 00:17:13.587 "completed_nvme_io": 0, 00:17:13.587 "transports": [ 00:17:13.587 { 00:17:13.587 "trtype": "TCP" 00:17:13.587 } 00:17:13.587 ] 00:17:13.587 }, 00:17:13.587 { 00:17:13.587 "name": "nvmf_tgt_poll_group_001", 00:17:13.587 "admin_qpairs": 0, 00:17:13.587 "io_qpairs": 0, 00:17:13.587 "current_admin_qpairs": 0, 00:17:13.587 "current_io_qpairs": 0, 00:17:13.587 "pending_bdev_io": 0, 00:17:13.587 "completed_nvme_io": 0, 00:17:13.587 "transports": [ 00:17:13.587 { 00:17:13.587 "trtype": "TCP" 00:17:13.587 } 00:17:13.587 ] 00:17:13.587 }, 00:17:13.587 { 00:17:13.587 "name": "nvmf_tgt_poll_group_002", 00:17:13.587 "admin_qpairs": 0, 00:17:13.587 "io_qpairs": 0, 00:17:13.587 "current_admin_qpairs": 0, 00:17:13.587 "current_io_qpairs": 0, 00:17:13.587 "pending_bdev_io": 0, 00:17:13.587 "completed_nvme_io": 0, 00:17:13.587 "transports": [ 00:17:13.587 { 00:17:13.587 "trtype": "TCP" 
00:17:13.587 } 00:17:13.587 ] 00:17:13.587 }, 00:17:13.587 { 00:17:13.587 "name": "nvmf_tgt_poll_group_003", 00:17:13.587 "admin_qpairs": 0, 00:17:13.587 "io_qpairs": 0, 00:17:13.587 "current_admin_qpairs": 0, 00:17:13.587 "current_io_qpairs": 0, 00:17:13.587 "pending_bdev_io": 0, 00:17:13.587 "completed_nvme_io": 0, 00:17:13.587 "transports": [ 00:17:13.587 { 00:17:13.587 "trtype": "TCP" 00:17:13.587 } 00:17:13.587 ] 00:17:13.587 } 00:17:13.587 ] 00:17:13.587 }' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:13.587 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:13.588 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:13.588 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:13.588 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:13.588 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:13.588 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:13.588 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.588 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.845 Malloc1 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
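What the rpc.sh checks above amount to: before nvmf_create_transport each of the four poll groups reports an empty "transports" array, afterwards each carries a TCP entry, and the admin/IO qpair counters still sum to zero because no host has connected yet. The jcount/jsum helpers are just jq plus wc/awk over the stats output; the same checks can be run by hand against rpc.py (assuming SPDK's scripts/rpc.py, which the rpc_cmd wrapper stands in for here):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # flags exactly as the harness passes them
scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l  # jcount: expect 4 poll groups for -m 0xF
scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'   # jsum: expect 0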
common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.845 [2024-11-09 23:50:39.821388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:13.845 [2024-11-09 23:50:39.844716] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:13.845 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:13.845 could not add new controller: failed to write to nvme-fabrics device 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:13.845 23:50:39 
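The failed connect above is the expected result: cnode1 was created with allow_any_host disabled (-d) and an empty host whitelist, so the target rejects this host NQN and the harness's NOT wrapper counts the non-zero exit as a pass. The positive path that follows is simply (sketch; NQN and address values are the ones from this trace, rpc.py standing in for rpc_cmd):

# whitelist this initiator, after which the same connect succeeds
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55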
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:13.845 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:13.846 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:13.846 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:13.846 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.846 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.846 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.846 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.411 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:14.411 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:14.411 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.411 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:14.411 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:16.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.939 [2024-11-09 23:50:42.747251] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:16.939 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:16.939 could not add new controller: failed to write to nvme-fabrics device 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.939 
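Both waitforserial and waitforserial_disconnect, traced repeatedly above, are polling helpers: they retry (up to 15 times, sleeping 2 s between tries) until lsblk does, or no longer does, list a namespace whose serial matches SPDKISFASTANDAWESOME. Roughly, under those assumptions:

# polling idea behind waitforserial (waitforserial_disconnect inverts the test)
serial=SPDKISFASTANDAWESOME
i=0
while (( i++ <= 15 )); do
    sleep 2
    (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && break   # device with the test serial showed up
done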
23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.939 23:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:17.505 23:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:17.505 23:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:17.505 23:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.505 23:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:17.505 23:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:19.404 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:19.404 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:19.404 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.404 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:19.404 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.404 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:19.404 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:19.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.663 
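From here to the end of the trace, target/rpc.sh repeats the same body five times (loops=5): build a subsystem around the Malloc1 bdev, export it over TCP, connect from the initiator, verify the serial, then tear everything back down. One iteration, condensed from the xtrace lines (rpc.py stands in for the rpc_cmd wrapper; NQN, address, and namespace ID are the values used in this run):

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1        # done once, earlier in the trace
for i in $(seq 1 5); do
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    # ... wait for the SPDKISFASTANDAWESOME serial to appear, then detach ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done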
23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.663 [2024-11-09 23:50:45.660514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.663 23:50:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:20.230 23:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:20.230 23:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:20.230 23:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.230 23:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:20.230 23:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:22.129 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:22.129 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:22.129 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:22.129 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:22.129 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:22.129 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:22.129 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:22.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.387 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:22.387 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:22.387 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:22.387 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:22.387 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.388 [2024-11-09 23:50:48.508877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.388 23:50:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:22.954 23:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:22.954 23:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:22.954 23:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.954 23:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:22.954 23:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.483 [2024-11-09 23:50:51.408812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.483 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:26.099 23:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:26.099 23:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:26.099 23:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.099 23:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:26.099 23:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:28.002 
23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:28.002 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:28.002 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:28.002 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:28.002 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.002 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:28.002 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.260 [2024-11-09 23:50:54.330083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.260 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:28.825 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:28.826 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:28.826 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.826 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:28.826 23:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:31.353 23:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:31.353 23:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:31.353 23:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.353 23:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:31.353 23:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.353 23:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:31.353 23:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.353 [2024-11-09 23:50:57.177884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.353 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:31.611 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:31.611 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:31.611 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:31.611 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:31.611 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:34.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:34.137 23:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:34.137 
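[editor's note] The loop traced above (target/rpc.sh@81-94) repeats the same cycle several times: create a subsystem with a fixed serial, expose it on a TCP listener, attach the Malloc1 bdev as a namespace, open it to any host, connect from the initiator with nvme-cli, wait until the serial shows up in lsblk, then disconnect and tear the subsystem down. Below is a minimal stand-alone sketch of one iteration, not the rpc.sh code itself; it assumes a running nvmf_tgt with a TCP transport and a Malloc1 bdev already created, SPDK's scripts/rpc.py, and nvme-cli on the initiator. The NQN, serial, listener address, and port are the values used in this run.

    # Sketch only (not the actual rpc.sh): one iteration of the cycle traced above.
    # Assumes nvmf_tgt is running with a tcp transport and a Malloc1 bdev,
    # and that scripts/rpc.py and nvme-cli are available.
    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host "$nqn"

    # The run above also passes --hostnqn/--hostid generated with 'nvme gen-hostnqn'.
    nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420

    # waitforserial: poll until a block device advertising the serial appears.
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done

    nvme disconnect -n "$nqn"

    $rpc nvmf_subsystem_remove_ns "$nqn" 5
    $rpc nvmf_delete_subsystem "$nqn"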
23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.137 [2024-11-09 23:51:00.041134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.137 [2024-11-09 23:51:00.089154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.137 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 
23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 [2024-11-09 23:51:00.137348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 [2024-11-09 23:51:00.185494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 [2024-11-09 23:51:00.233661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.138 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:34.138 "tick_rate": 2700000000, 00:17:34.138 "poll_groups": [ 00:17:34.138 { 00:17:34.138 "name": "nvmf_tgt_poll_group_000", 00:17:34.138 "admin_qpairs": 2, 00:17:34.138 "io_qpairs": 84, 00:17:34.138 "current_admin_qpairs": 0, 00:17:34.138 "current_io_qpairs": 0, 00:17:34.138 "pending_bdev_io": 0, 00:17:34.138 "completed_nvme_io": 183, 00:17:34.138 "transports": [ 00:17:34.138 { 00:17:34.138 "trtype": "TCP" 00:17:34.138 } 00:17:34.138 ] 00:17:34.138 }, 00:17:34.138 { 00:17:34.138 "name": "nvmf_tgt_poll_group_001", 00:17:34.138 "admin_qpairs": 2, 00:17:34.138 "io_qpairs": 84, 00:17:34.138 "current_admin_qpairs": 0, 00:17:34.138 "current_io_qpairs": 0, 00:17:34.138 "pending_bdev_io": 0, 00:17:34.138 "completed_nvme_io": 178, 00:17:34.138 "transports": [ 00:17:34.138 { 00:17:34.138 "trtype": "TCP" 00:17:34.138 } 00:17:34.138 ] 00:17:34.138 }, 00:17:34.138 { 00:17:34.138 "name": "nvmf_tgt_poll_group_002", 00:17:34.138 "admin_qpairs": 1, 00:17:34.138 "io_qpairs": 84, 00:17:34.138 "current_admin_qpairs": 0, 00:17:34.138 "current_io_qpairs": 0, 00:17:34.138 "pending_bdev_io": 0, 00:17:34.138 "completed_nvme_io": 143, 00:17:34.138 "transports": [ 00:17:34.138 { 00:17:34.138 "trtype": "TCP" 00:17:34.138 } 00:17:34.138 ] 00:17:34.138 }, 00:17:34.138 { 00:17:34.138 "name": "nvmf_tgt_poll_group_003", 00:17:34.138 "admin_qpairs": 2, 00:17:34.138 "io_qpairs": 84, 00:17:34.138 "current_admin_qpairs": 0, 00:17:34.138 "current_io_qpairs": 0, 00:17:34.138 "pending_bdev_io": 0, 00:17:34.139 "completed_nvme_io": 182, 00:17:34.139 "transports": [ 00:17:34.139 { 00:17:34.139 "trtype": "TCP" 00:17:34.139 } 00:17:34.139 ] 00:17:34.139 } 00:17:34.139 ] 00:17:34.139 }' 00:17:34.139 23:51:00 
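[editor's note] The nvmf_get_stats dump above is what the jsum checks on the following lines consume: jq pulls one counter out of every poll group and awk adds them up, giving 2+2+1+2 = 7 admin qpairs and 4 x 84 = 336 I/O qpairs for this run. A stand-alone version of that aggregation, assuming the JSON has been saved to stats.json rather than kept in the $stats shell variable as rpc.sh does:

    # jsum-style aggregation over the nvmf_get_stats output (sketch; rpc.sh keeps
    # the JSON in a shell variable instead of a file).
    jsum() {
        local filter=$1
        jq "$filter" stats.json | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # 7 in the run above
    jsum '.poll_groups[].io_qpairs'      # 336 in the run above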
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:34.139 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:34.139 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:34.139 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:34.139 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:34.139 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:34.139 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:34.139 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:34.139 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:34.397 rmmod nvme_tcp 00:17:34.397 rmmod nvme_fabrics 00:17:34.397 rmmod nvme_keyring 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3441619 ']' 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3441619 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3441619 ']' 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3441619 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3441619 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3441619' 00:17:34.397 killing process with pid 3441619 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3441619 00:17:34.397 23:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 3441619 00:17:35.771 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:35.771 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:35.771 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:35.771 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:35.771 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:35.771 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:35.771 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:35.771 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.771 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:35.771 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.771 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.771 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.673 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:37.673 00:17:37.673 real 0m27.739s 00:17:37.673 user 1m29.045s 00:17:37.673 sys 0m4.528s 00:17:37.673 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:37.673 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.673 ************************************ 00:17:37.673 END TEST nvmf_rpc 00:17:37.673 ************************************ 00:17:37.673 23:51:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:37.673 23:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:37.673 23:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:37.673 23:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.673 ************************************ 00:17:37.673 START TEST nvmf_invalid 00:17:37.673 ************************************ 00:17:37.673 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:37.932 * Looking for test storage... 
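[editor's note] For reference, the nvmftestfini teardown that closes the nvmf_rpc run above boils down to unloading the host-side NVMe modules, killing the nvmf_tgt process, and undoing the test network setup. A rough, hedged sketch of that sequence follows; the pid, namespace, and interface names are the ones from this run, and killprocess/remove_spdk_ns do a bit more bookkeeping than shown:

    # Rough shape of nvmftestfini as traced above (sketch, not the common.sh code).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    kill "$nvmfpid" && wait "$nvmfpid"      # killprocess 3441619 in this run

    # Strip only the iptables rules the test tagged with SPDK_NVMF.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete cvl_0_0_ns_spdk         # remove_spdk_ns (assumed equivalent)
    ip -4 addr flush cvl_0_1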
00:17:37.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:37.932 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.933 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:37.933 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.933 --rc genhtml_branch_coverage=1 00:17:37.933 --rc genhtml_function_coverage=1 00:17:37.933 --rc genhtml_legend=1 00:17:37.933 --rc geninfo_all_blocks=1 00:17:37.933 --rc geninfo_unexecuted_blocks=1 00:17:37.933 00:17:37.933 ' 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.933 --rc genhtml_branch_coverage=1 00:17:37.933 --rc genhtml_function_coverage=1 00:17:37.933 --rc genhtml_legend=1 00:17:37.933 --rc geninfo_all_blocks=1 00:17:37.933 --rc geninfo_unexecuted_blocks=1 00:17:37.933 00:17:37.933 ' 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.933 --rc genhtml_branch_coverage=1 00:17:37.933 --rc genhtml_function_coverage=1 00:17:37.933 --rc genhtml_legend=1 00:17:37.933 --rc geninfo_all_blocks=1 00:17:37.933 --rc geninfo_unexecuted_blocks=1 00:17:37.933 00:17:37.933 ' 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.933 --rc genhtml_branch_coverage=1 00:17:37.933 --rc genhtml_function_coverage=1 00:17:37.933 --rc genhtml_legend=1 00:17:37.933 --rc geninfo_all_blocks=1 00:17:37.933 --rc geninfo_unexecuted_blocks=1 00:17:37.933 00:17:37.933 ' 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:37.933 23:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.933 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:40.465 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.465 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:40.465 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:40.466 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:40.466 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:40.466 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:40.466 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:40.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:17:40.466 00:17:40.466 --- 10.0.0.2 ping statistics --- 00:17:40.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.466 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:40.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:17:40.466 00:17:40.466 --- 10.0.0.1 ping statistics --- 00:17:40.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.466 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:40.466 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3446544 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3446544 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3446544 ']' 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:40.467 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:40.467 [2024-11-09 23:51:06.368473] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
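The nvmf_tcp_init trace above builds the test topology entirely on one host: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace to act as the target, while the other port (cvl_0_1) stays in the default namespace as the initiator, and an iptables rule opens TCP/4420 between them. A minimal standalone sketch of that setup follows; the interface names, addresses, and namespace name are taken from the trace, while the error handling and the shortened rule comment are simplifying assumptions.

  # Sketch (run as root; assumes idle interfaces cvl_0_0 / cvl_0_1 as in this run)
  set -e
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                               # namespace that owns the target port
  ip link set cvl_0_0 netns "$NS"                  # move the target port out of the default ns
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the default ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP listener port; the comment lets teardown strip the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                               # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator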
00:17:40.467 [2024-11-09 23:51:06.368626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.467 [2024-11-09 23:51:06.514582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.467 [2024-11-09 23:51:06.653472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.467 [2024-11-09 23:51:06.653552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.467 [2024-11-09 23:51:06.653578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.467 [2024-11-09 23:51:06.653611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.467 [2024-11-09 23:51:06.653632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.467 [2024-11-09 23:51:06.656485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.467 [2024-11-09 23:51:06.656556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.467 [2024-11-09 23:51:06.656669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.467 [2024-11-09 23:51:06.656673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:41.401 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:41.401 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:17:41.401 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.401 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:41.401 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:41.401 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.401 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:41.401 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28028 00:17:41.659 [2024-11-09 23:51:07.644826] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:41.659 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:41.659 { 00:17:41.659 "nqn": "nqn.2016-06.io.spdk:cnode28028", 00:17:41.659 "tgt_name": "foobar", 00:17:41.659 "method": "nvmf_create_subsystem", 00:17:41.659 "req_id": 1 00:17:41.659 } 00:17:41.659 Got JSON-RPC error response 00:17:41.659 response: 00:17:41.659 { 00:17:41.659 "code": -32603, 00:17:41.659 "message": "Unable to find target foobar" 00:17:41.659 }' 00:17:41.659 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:41.659 { 00:17:41.659 "nqn": "nqn.2016-06.io.spdk:cnode28028", 00:17:41.659 "tgt_name": "foobar", 00:17:41.659 "method": "nvmf_create_subsystem", 00:17:41.659 "req_id": 1 00:17:41.659 } 00:17:41.659 Got JSON-RPC error response 00:17:41.659 
response: 00:17:41.659 { 00:17:41.659 "code": -32603, 00:17:41.659 "message": "Unable to find target foobar" 00:17:41.659 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:41.659 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:41.659 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode30650 00:17:41.918 [2024-11-09 23:51:07.937908] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30650: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:41.918 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:41.918 { 00:17:41.918 "nqn": "nqn.2016-06.io.spdk:cnode30650", 00:17:41.918 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:41.918 "method": "nvmf_create_subsystem", 00:17:41.918 "req_id": 1 00:17:41.918 } 00:17:41.918 Got JSON-RPC error response 00:17:41.918 response: 00:17:41.918 { 00:17:41.918 "code": -32602, 00:17:41.918 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:41.918 }' 00:17:41.918 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:41.918 { 00:17:41.918 "nqn": "nqn.2016-06.io.spdk:cnode30650", 00:17:41.918 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:41.918 "method": "nvmf_create_subsystem", 00:17:41.918 "req_id": 1 00:17:41.918 } 00:17:41.918 Got JSON-RPC error response 00:17:41.918 response: 00:17:41.918 { 00:17:41.918 "code": -32602, 00:17:41.918 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:41.918 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:41.918 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:41.918 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18672 00:17:42.177 [2024-11-09 23:51:08.234982] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18672: invalid model number 'SPDK_Controller' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:42.177 { 00:17:42.177 "nqn": "nqn.2016-06.io.spdk:cnode18672", 00:17:42.177 "model_number": "SPDK_Controller\u001f", 00:17:42.177 "method": "nvmf_create_subsystem", 00:17:42.177 "req_id": 1 00:17:42.177 } 00:17:42.177 Got JSON-RPC error response 00:17:42.177 response: 00:17:42.177 { 00:17:42.177 "code": -32602, 00:17:42.177 "message": "Invalid MN SPDK_Controller\u001f" 00:17:42.177 }' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:42.177 { 00:17:42.177 "nqn": "nqn.2016-06.io.spdk:cnode18672", 00:17:42.177 "model_number": "SPDK_Controller\u001f", 00:17:42.177 "method": "nvmf_create_subsystem", 00:17:42.177 "req_id": 1 00:17:42.177 } 00:17:42.177 Got JSON-RPC error response 00:17:42.177 response: 00:17:42.177 { 00:17:42.177 "code": -32602, 00:17:42.177 "message": "Invalid MN SPDK_Controller\u001f" 00:17:42.177 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:42.177 23:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.177 23:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:42.177 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:42.178 
23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
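The long run of printf %x / echo -e entries surrounding this point is target/invalid.sh's gen_random_s helper assembling a random 21-character serial number (and, further down, a 41-character model number) one printable ASCII character at a time. A condensed sketch of the same idea follows; the helper name and the lengths come from the trace, while the implementation details are an assumption.

  # Sketch: random string of printable ASCII (codes 32-126), built one character per iteration
  gen_random_s() {
    local length=$1 string= ll code ch
    for ((ll = 0; ll < length; ll++)); do
      code=$((32 + RANDOM % 95))                 # pick a printable code point
      printf -v ch "\x$(printf '%x' "$code")"    # hex code -> character, as in the trace
      string+=$ch
    done
    printf '%s\n' "$string"
  }
  serial=$(gen_random_s 21)    # e.g. feeds the "invalid serial number" check below
  model=$(gen_random_s 41)     # e.g. feeds the "invalid model number" check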
00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ s == \- ]] 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'sT@*$3[Rjzyq%UQ!_>T9;' 00:17:42.178 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'sT@*$3[Rjzyq%UQ!_>T9;' nqn.2016-06.io.spdk:cnode27515 00:17:42.436 [2024-11-09 23:51:08.596259] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27515: invalid serial number 'sT@*$3[Rjzyq%UQ!_>T9;' 00:17:42.436 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 
-- # out='request: 00:17:42.436 { 00:17:42.436 "nqn": "nqn.2016-06.io.spdk:cnode27515", 00:17:42.436 "serial_number": "sT@*$3[Rjzyq%UQ!_>T9;", 00:17:42.436 "method": "nvmf_create_subsystem", 00:17:42.436 "req_id": 1 00:17:42.436 } 00:17:42.436 Got JSON-RPC error response 00:17:42.436 response: 00:17:42.436 { 00:17:42.436 "code": -32602, 00:17:42.436 "message": "Invalid SN sT@*$3[Rjzyq%UQ!_>T9;" 00:17:42.436 }' 00:17:42.436 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:42.436 { 00:17:42.436 "nqn": "nqn.2016-06.io.spdk:cnode27515", 00:17:42.436 "serial_number": "sT@*$3[Rjzyq%UQ!_>T9;", 00:17:42.437 "method": "nvmf_create_subsystem", 00:17:42.437 "req_id": 1 00:17:42.437 } 00:17:42.437 Got JSON-RPC error response 00:17:42.437 response: 00:17:42.437 { 00:17:42.437 "code": -32602, 00:17:42.437 "message": "Invalid SN sT@*$3[Rjzyq%UQ!_>T9;" 00:17:42.437 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:42.437 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6a' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 125 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:42.696 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:42.697 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:42.698 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
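The character-by-character build above ends in a 41-character random model number that invalid.sh hands to nvmf_create_subsystem and then glob-matches against the JSON-RPC error text; every negative test in this run (invalid target, serial number, model number, and the cntlid-range checks further down) follows that same capture-and-match shape. A hedged sketch of the pattern, reusing the control-character model number from the @50 check earlier; the rpc.py path is the one from this job, while the cnode1 NQN and the error handling are assumptions.

  # Sketch: run an RPC that is expected to fail, keep its JSON-RPC error, assert on the message
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bad_model_number=$'SPDK_Controller\037'    # the control character makes it invalid
  out=$("$rpc" nvmf_create_subsystem -d "$bad_model_number" nqn.2016-06.io.spdk:cnode1 2>&1) || true
  [[ $out == *"Invalid MN"* ]] || { echo "unexpected response: $out"; exit 1; }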
00:17:42.698 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.698 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.698 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:42.698 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:42.698 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:42.698 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:42.698 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:42.698 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ @ == \- ]] 00:17:42.698 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '@D.e~_ZD\jaQ&stz}GbtjLis-pGnyEey=.GbJnN!`' 00:17:42.698 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '@D.e~_ZD\jaQ&stz}GbtjLis-pGnyEey=.GbJnN!`' nqn.2016-06.io.spdk:cnode26630 00:17:42.956 [2024-11-09 23:51:08.993843] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26630: invalid model number '@D.e~_ZD\jaQ&stz}GbtjLis-pGnyEey=.GbJnN!`' 00:17:42.956 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:42.956 { 00:17:42.956 "nqn": "nqn.2016-06.io.spdk:cnode26630", 00:17:42.956 "model_number": "@D.e~_ZD\\jaQ&stz}GbtjLis-pGnyEey=.GbJnN!`", 00:17:42.956 "method": "nvmf_create_subsystem", 00:17:42.956 "req_id": 1 00:17:42.956 } 00:17:42.956 Got JSON-RPC error response 00:17:42.956 response: 00:17:42.956 { 00:17:42.956 "code": -32602, 00:17:42.956 "message": "Invalid MN @D.e~_ZD\\jaQ&stz}GbtjLis-pGnyEey=.GbJnN!`" 00:17:42.956 }' 00:17:42.956 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:42.956 { 00:17:42.956 "nqn": "nqn.2016-06.io.spdk:cnode26630", 00:17:42.956 "model_number": "@D.e~_ZD\\jaQ&stz}GbtjLis-pGnyEey=.GbJnN!`", 00:17:42.956 "method": "nvmf_create_subsystem", 00:17:42.956 "req_id": 1 00:17:42.956 } 00:17:42.956 Got JSON-RPC error response 00:17:42.956 response: 00:17:42.956 { 00:17:42.956 "code": -32602, 00:17:42.956 "message": "Invalid MN @D.e~_ZD\\jaQ&stz}GbtjLis-pGnyEey=.GbJnN!`" 00:17:42.956 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:42.956 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:43.214 [2024-11-09 23:51:09.270878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.214 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:43.472 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:43.472 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:43.472 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:43.472 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:43.472 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:43.729 [2024-11-09 23:51:09.826106] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:43.729 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:43.729 { 00:17:43.729 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:43.729 "listen_address": { 00:17:43.729 "trtype": "tcp", 00:17:43.729 "traddr": "", 00:17:43.729 "trsvcid": "4421" 00:17:43.729 }, 00:17:43.729 "method": "nvmf_subsystem_remove_listener", 00:17:43.729 "req_id": 1 00:17:43.729 } 00:17:43.729 Got JSON-RPC error response 00:17:43.729 response: 00:17:43.729 { 00:17:43.729 "code": -32602, 00:17:43.729 "message": "Invalid parameters" 00:17:43.729 }' 00:17:43.729 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:43.729 { 00:17:43.729 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:43.729 "listen_address": { 00:17:43.729 "trtype": "tcp", 00:17:43.729 "traddr": "", 00:17:43.729 "trsvcid": "4421" 00:17:43.729 }, 00:17:43.729 "method": "nvmf_subsystem_remove_listener", 00:17:43.729 "req_id": 1 00:17:43.729 } 00:17:43.729 Got JSON-RPC error response 00:17:43.729 response: 00:17:43.729 { 00:17:43.729 "code": -32602, 00:17:43.729 "message": "Invalid parameters" 00:17:43.729 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:43.729 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3601 -i 0 00:17:43.988 [2024-11-09 23:51:10.115123] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3601: invalid cntlid range [0-65519] 00:17:43.988 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:43.988 { 00:17:43.988 "nqn": "nqn.2016-06.io.spdk:cnode3601", 00:17:43.988 "min_cntlid": 0, 00:17:43.988 "method": "nvmf_create_subsystem", 00:17:43.988 "req_id": 1 00:17:43.988 } 00:17:43.988 Got JSON-RPC error response 00:17:43.988 response: 00:17:43.988 { 00:17:43.988 "code": -32602, 00:17:43.988 "message": "Invalid cntlid range [0-65519]" 00:17:43.988 }' 00:17:43.988 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:43.988 { 00:17:43.988 "nqn": "nqn.2016-06.io.spdk:cnode3601", 00:17:43.988 "min_cntlid": 0, 00:17:43.988 "method": "nvmf_create_subsystem", 00:17:43.988 "req_id": 1 00:17:43.988 } 00:17:43.988 Got JSON-RPC error response 00:17:43.988 response: 00:17:43.988 { 00:17:43.988 "code": -32602, 00:17:43.988 "message": "Invalid cntlid range [0-65519]" 00:17:43.988 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:43.988 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12464 -i 65520 00:17:44.246 [2024-11-09 23:51:10.404097] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12464: invalid cntlid range [65520-65519] 00:17:44.246 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:44.246 { 00:17:44.246 "nqn": "nqn.2016-06.io.spdk:cnode12464", 00:17:44.246 "min_cntlid": 65520, 00:17:44.246 "method": "nvmf_create_subsystem", 00:17:44.246 "req_id": 1 00:17:44.246 } 00:17:44.246 Got JSON-RPC 
error response 00:17:44.246 response: 00:17:44.246 { 00:17:44.246 "code": -32602, 00:17:44.246 "message": "Invalid cntlid range [65520-65519]" 00:17:44.246 }' 00:17:44.246 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:44.246 { 00:17:44.246 "nqn": "nqn.2016-06.io.spdk:cnode12464", 00:17:44.246 "min_cntlid": 65520, 00:17:44.246 "method": "nvmf_create_subsystem", 00:17:44.246 "req_id": 1 00:17:44.246 } 00:17:44.246 Got JSON-RPC error response 00:17:44.246 response: 00:17:44.246 { 00:17:44.246 "code": -32602, 00:17:44.246 "message": "Invalid cntlid range [65520-65519]" 00:17:44.246 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:44.246 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16884 -I 0 00:17:44.503 [2024-11-09 23:51:10.673048] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16884: invalid cntlid range [1-0] 00:17:44.504 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:44.504 { 00:17:44.504 "nqn": "nqn.2016-06.io.spdk:cnode16884", 00:17:44.504 "max_cntlid": 0, 00:17:44.504 "method": "nvmf_create_subsystem", 00:17:44.504 "req_id": 1 00:17:44.504 } 00:17:44.504 Got JSON-RPC error response 00:17:44.504 response: 00:17:44.504 { 00:17:44.504 "code": -32602, 00:17:44.504 "message": "Invalid cntlid range [1-0]" 00:17:44.504 }' 00:17:44.504 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:44.504 { 00:17:44.504 "nqn": "nqn.2016-06.io.spdk:cnode16884", 00:17:44.504 "max_cntlid": 0, 00:17:44.504 "method": "nvmf_create_subsystem", 00:17:44.504 "req_id": 1 00:17:44.504 } 00:17:44.504 Got JSON-RPC error response 00:17:44.504 response: 00:17:44.504 { 00:17:44.504 "code": -32602, 00:17:44.504 "message": "Invalid cntlid range [1-0]" 00:17:44.504 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:44.504 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6256 -I 65520 00:17:44.762 [2024-11-09 23:51:10.950038] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6256: invalid cntlid range [1-65520] 00:17:45.019 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:45.019 { 00:17:45.019 "nqn": "nqn.2016-06.io.spdk:cnode6256", 00:17:45.019 "max_cntlid": 65520, 00:17:45.019 "method": "nvmf_create_subsystem", 00:17:45.019 "req_id": 1 00:17:45.019 } 00:17:45.019 Got JSON-RPC error response 00:17:45.019 response: 00:17:45.019 { 00:17:45.019 "code": -32602, 00:17:45.019 "message": "Invalid cntlid range [1-65520]" 00:17:45.019 }' 00:17:45.019 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:45.019 { 00:17:45.019 "nqn": "nqn.2016-06.io.spdk:cnode6256", 00:17:45.019 "max_cntlid": 65520, 00:17:45.019 "method": "nvmf_create_subsystem", 00:17:45.019 "req_id": 1 00:17:45.019 } 00:17:45.019 Got JSON-RPC error response 00:17:45.019 response: 00:17:45.019 { 00:17:45.019 "code": -32602, 00:17:45.019 "message": "Invalid cntlid range [1-65520]" 00:17:45.019 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:45.019 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22152 -i 6 -I 5 00:17:45.277 [2024-11-09 23:51:11.222971] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22152: invalid cntlid range [6-5] 00:17:45.277 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:45.277 { 00:17:45.277 "nqn": "nqn.2016-06.io.spdk:cnode22152", 00:17:45.277 "min_cntlid": 6, 00:17:45.277 "max_cntlid": 5, 00:17:45.277 "method": "nvmf_create_subsystem", 00:17:45.277 "req_id": 1 00:17:45.277 } 00:17:45.277 Got JSON-RPC error response 00:17:45.277 response: 00:17:45.277 { 00:17:45.277 "code": -32602, 00:17:45.277 "message": "Invalid cntlid range [6-5]" 00:17:45.277 }' 00:17:45.277 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:45.277 { 00:17:45.277 "nqn": "nqn.2016-06.io.spdk:cnode22152", 00:17:45.277 "min_cntlid": 6, 00:17:45.277 "max_cntlid": 5, 00:17:45.277 "method": "nvmf_create_subsystem", 00:17:45.277 "req_id": 1 00:17:45.277 } 00:17:45.277 Got JSON-RPC error response 00:17:45.277 response: 00:17:45.277 { 00:17:45.277 "code": -32602, 00:17:45.277 "message": "Invalid cntlid range [6-5]" 00:17:45.278 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:45.278 { 00:17:45.278 "name": "foobar", 00:17:45.278 "method": "nvmf_delete_target", 00:17:45.278 "req_id": 1 00:17:45.278 } 00:17:45.278 Got JSON-RPC error response 00:17:45.278 response: 00:17:45.278 { 00:17:45.278 "code": -32602, 00:17:45.278 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:45.278 }' 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:45.278 { 00:17:45.278 "name": "foobar", 00:17:45.278 "method": "nvmf_delete_target", 00:17:45.278 "req_id": 1 00:17:45.278 } 00:17:45.278 Got JSON-RPC error response 00:17:45.278 response: 00:17:45.278 { 00:17:45.278 "code": -32602, 00:17:45.278 "message": "The specified target doesn't exist, cannot delete it." 
00:17:45.278 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:45.278 rmmod nvme_tcp 00:17:45.278 rmmod nvme_fabrics 00:17:45.278 rmmod nvme_keyring 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3446544 ']' 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3446544 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 3446544 ']' 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 3446544 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:45.278 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3446544 00:17:45.536 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:45.536 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:45.536 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3446544' 00:17:45.536 killing process with pid 3446544 00:17:45.536 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 3446544 00:17:45.536 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 3446544 00:17:46.471 23:51:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:46.471 23:51:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:46.471 23:51:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:46.471 23:51:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:46.471 23:51:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:46.471 23:51:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:46.471 23:51:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:17:46.471 23:51:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:46.471 23:51:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:46.471 23:51:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.471 23:51:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.471 23:51:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:49.006 00:17:49.006 real 0m10.748s 00:17:49.006 user 0m27.145s 00:17:49.006 sys 0m2.741s 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:49.006 ************************************ 00:17:49.006 END TEST nvmf_invalid 00:17:49.006 ************************************ 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:49.006 ************************************ 00:17:49.006 START TEST nvmf_connect_stress 00:17:49.006 ************************************ 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:49.006 * Looking for test storage... 
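The nvmf_invalid run traced above exercises one RPC shape over and over: call nvmf_create_subsystem through scripts/rpc.py with a controller-ID range outside 1-65519 (or with min above max) and check that the JSON-RPC reply contains "Invalid cntlid range". A minimal standalone sketch of that pattern follows; it assumes an nvmf_tgt is already serving RPCs on its default socket, and SPDK_DIR plus the expect_cntlid_error helper are illustrative names rather than parts of the test suite.

#!/usr/bin/env bash
# Sketch: reproduce the invalid-cntlid probes from target/invalid.sh traced above.
# Assumes a running nvmf_tgt reachable over the default /var/tmp/spdk.sock.
SPDK_DIR=/path/to/spdk            # hypothetical checkout location
rpc="$SPDK_DIR/scripts/rpc.py"

expect_cntlid_error() {           # illustrative helper, not from the suite
  local out
  # The RPC is expected to fail; capture both streams so the error text can be matched.
  out=$("$rpc" nvmf_create_subsystem "$@" 2>&1) && return 1
  [[ $out == *"Invalid cntlid range"* ]]
}

expect_cntlid_error nqn.2016-06.io.spdk:cnode3601  -i 0         # min below 1
expect_cntlid_error nqn.2016-06.io.spdk:cnode12464 -i 65520     # min above 65519
expect_cntlid_error nqn.2016-06.io.spdk:cnode16884 -I 0         # max below 1
expect_cntlid_error nqn.2016-06.io.spdk:cnode6256  -I 65520     # max above 65519
expect_cntlid_error nqn.2016-06.io.spdk:cnode22152 -i 6 -I 5    # min greater than max

The nvmf_delete_target probe at the end of the same script follows the identical capture-and-match pattern, only through test/nvmf/target/multitarget_rpc.py and against the "The specified target doesn't exist" message.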
00:17:49.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.006 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:49.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.007 --rc genhtml_branch_coverage=1 00:17:49.007 --rc genhtml_function_coverage=1 00:17:49.007 --rc genhtml_legend=1 00:17:49.007 --rc geninfo_all_blocks=1 00:17:49.007 --rc geninfo_unexecuted_blocks=1 00:17:49.007 00:17:49.007 ' 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:49.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.007 --rc genhtml_branch_coverage=1 00:17:49.007 --rc genhtml_function_coverage=1 00:17:49.007 --rc genhtml_legend=1 00:17:49.007 --rc geninfo_all_blocks=1 00:17:49.007 --rc geninfo_unexecuted_blocks=1 00:17:49.007 00:17:49.007 ' 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:49.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.007 --rc genhtml_branch_coverage=1 00:17:49.007 --rc genhtml_function_coverage=1 00:17:49.007 --rc genhtml_legend=1 00:17:49.007 --rc geninfo_all_blocks=1 00:17:49.007 --rc geninfo_unexecuted_blocks=1 00:17:49.007 00:17:49.007 ' 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:49.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.007 --rc genhtml_branch_coverage=1 00:17:49.007 --rc genhtml_function_coverage=1 00:17:49.007 --rc genhtml_legend=1 00:17:49.007 --rc geninfo_all_blocks=1 00:17:49.007 --rc geninfo_unexecuted_blocks=1 00:17:49.007 00:17:49.007 ' 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:49.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:49.007 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:50.912 23:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:50.912 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:50.912 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:50.912 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:50.912 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.912 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.912 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.912 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.912 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:50.912 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.912 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.913 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.913 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:50.913 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:50.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:17:50.913 00:17:50.913 --- 10.0.0.2 ping statistics --- 00:17:50.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.913 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:17:50.913 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:17:51.267 00:17:51.267 --- 10.0.0.1 ping statistics --- 00:17:51.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.267 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:51.267 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.267 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:51.267 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:51.267 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.267 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:51.267 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:51.267 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.267 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:51.267 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:51.267 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:51.267 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:51.267 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:51.268 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.268 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3449831 00:17:51.268 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:51.268 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3449831 00:17:51.268 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3449831 ']' 00:17:51.268 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.268 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:51.268 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:51.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.268 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:51.268 23:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.268 [2024-11-09 23:51:17.240087] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:17:51.268 [2024-11-09 23:51:17.240259] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.268 [2024-11-09 23:51:17.387206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:51.578 [2024-11-09 23:51:17.527648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.578 [2024-11-09 23:51:17.527728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.578 [2024-11-09 23:51:17.527753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.578 [2024-11-09 23:51:17.527777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.578 [2024-11-09 23:51:17.527796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.578 [2024-11-09 23:51:17.530480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.578 [2024-11-09 23:51:17.530574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.578 [2024-11-09 23:51:17.530604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.144 [2024-11-09 23:51:18.217035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
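Before the stress loop starts, connect_stress.sh (the @15 through @18 markers in the trace) brings the target up with a short RPC sequence: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, then, in the trace that continues below, add a listener on 10.0.0.2:4420 and create the NULL1 bdev. Collected in one place, and calling scripts/rpc.py directly instead of going through the suite's rpc_cmd wrapper, the sequence looks roughly like this (a sketch that reuses the flag values from this run without restating what each flag means):

# Sketch of the target bring-up performed by connect_stress.sh in this run.
# Assumes nvmf_tgt is already running and reachable over its default RPC socket.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport with the options traced above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                               # subsystem the stress tool will connect to
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                                   # listener on the address assigned to cvl_0_0 earlier
$rpc bdev_null_create NULL1 1000 512                              # null bdev NULL1, sized as traced above

Once those RPCs succeed, the script launches test/nvme/connect_stress/connect_stress against the same address (PERF_PID 3449992 in this run), and the repeated kill -0 3449992 checks in the remainder of the trace confirm that the stress process is still alive.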
00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.144 [2024-11-09 23:51:18.237269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.144 NULL1 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3449992 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.144 23:51:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:52.144 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.145 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.145 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.711 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.711 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:52.711 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.711 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.711 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.969 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.969 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:52.969 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.969 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.969 23:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.226 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.227 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:53.227 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.227 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.227 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.485 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.485 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:53.485 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.485 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.485 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.742 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.742 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:53.742 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.742 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.742 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.310 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.310 23:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:54.310 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.310 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.310 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.568 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.568 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:54.568 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.568 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.568 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.826 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.826 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:54.826 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.826 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.826 23:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.084 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.084 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:55.084 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.084 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.084 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.650 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.650 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:55.650 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.650 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.650 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.908 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.908 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:55.908 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.908 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.908 23:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.167 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.167 23:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:56.167 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.167 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.167 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.426 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.426 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:56.426 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.426 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.426 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.683 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.683 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:56.683 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.683 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.683 23:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.249 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.249 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:57.249 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.249 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.249 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.507 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.507 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:57.507 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.507 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.507 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.766 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.766 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:57.766 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.766 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.766 23:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.024 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.024 23:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:58.024 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.024 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.024 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.281 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.281 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:58.281 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.281 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.281 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.847 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.847 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:58.847 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.847 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.847 23:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.105 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.105 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:59.105 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.105 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.105 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.363 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.363 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:59.363 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.363 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.363 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.621 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.621 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:17:59.621 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.621 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.621 23:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.187 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.187 23:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:18:00.187 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.187 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.187 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.445 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.445 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:18:00.445 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.445 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.445 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.703 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.703 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:18:00.703 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.703 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.703 23:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.961 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.961 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:18:00.961 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.961 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.961 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.220 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.220 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:18:01.220 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.220 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.220 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.785 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.785 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:18:01.785 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.785 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.785 23:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.044 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.044 23:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:18:02.044 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.044 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.044 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.302 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.302 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:18:02.302 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.302 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.302 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.302 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.560 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.560 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3449992 00:18:02.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3449992) - No such process 00:18:02.560 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3449992 00:18:02.560 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:02.560 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:02.560 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:02.560 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:02.560 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:02.560 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:02.560 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:02.560 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:02.560 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:02.560 rmmod nvme_tcp 00:18:02.560 rmmod nvme_fabrics 00:18:02.560 rmmod nvme_keyring 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3449831 ']' 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3449831 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3449831 ']' 00:18:02.818 23:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3449831 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3449831 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3449831' 00:18:02.818 killing process with pid 3449831 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3449831 00:18:02.818 23:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3449831 00:18:03.753 23:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:03.753 23:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:03.753 23:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:03.753 23:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:03.753 23:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:03.753 23:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:03.753 23:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:03.753 23:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:03.753 23:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:03.753 23:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.753 23:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.753 23:51:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.289 23:51:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:06.289 00:18:06.289 real 0m17.275s 00:18:06.289 user 0m42.696s 00:18:06.289 sys 0m6.146s 00:18:06.289 23:51:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:06.289 23:51:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:06.289 ************************************ 00:18:06.289 END TEST nvmf_connect_stress 00:18:06.289 ************************************ 00:18:06.289 23:51:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:06.289 23:51:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:06.289 
23:51:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:06.289 23:51:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:06.289 ************************************ 00:18:06.289 START TEST nvmf_fused_ordering 00:18:06.289 ************************************ 00:18:06.289 23:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:06.289 * Looking for test storage... 00:18:06.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:06.289 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:06.289 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:18:06.289 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:06.289 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:06.289 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:06.289 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:06.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.290 --rc genhtml_branch_coverage=1 00:18:06.290 --rc genhtml_function_coverage=1 00:18:06.290 --rc genhtml_legend=1 00:18:06.290 --rc geninfo_all_blocks=1 00:18:06.290 --rc geninfo_unexecuted_blocks=1 00:18:06.290 00:18:06.290 ' 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:06.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.290 --rc genhtml_branch_coverage=1 00:18:06.290 --rc genhtml_function_coverage=1 00:18:06.290 --rc genhtml_legend=1 00:18:06.290 --rc geninfo_all_blocks=1 00:18:06.290 --rc geninfo_unexecuted_blocks=1 00:18:06.290 00:18:06.290 ' 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:06.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.290 --rc genhtml_branch_coverage=1 00:18:06.290 --rc genhtml_function_coverage=1 00:18:06.290 --rc genhtml_legend=1 00:18:06.290 --rc geninfo_all_blocks=1 00:18:06.290 --rc geninfo_unexecuted_blocks=1 00:18:06.290 00:18:06.290 ' 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:06.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.290 --rc genhtml_branch_coverage=1 00:18:06.290 --rc genhtml_function_coverage=1 00:18:06.290 --rc genhtml_legend=1 00:18:06.290 --rc geninfo_all_blocks=1 00:18:06.290 --rc geninfo_unexecuted_blocks=1 00:18:06.290 00:18:06.290 ' 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:06.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:06.290 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.291 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.291 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.291 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:06.291 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:06.291 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:06.291 23:51:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:08.195 23:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:08.195 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:08.195 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.195 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:08.196 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:08.196 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:08.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:18:08.196 00:18:08.196 --- 10.0.0.2 ping statistics --- 00:18:08.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.196 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:08.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:08.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:18:08.196 00:18:08.196 --- 10.0.0.1 ping statistics --- 00:18:08.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.196 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3453301 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3453301 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3453301 ']' 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:08.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:08.196 23:51:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:08.196 [2024-11-09 23:51:34.389683] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:18:08.196 [2024-11-09 23:51:34.389827] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.455 [2024-11-09 23:51:34.545150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.713 [2024-11-09 23:51:34.687822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.713 [2024-11-09 23:51:34.687902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.713 [2024-11-09 23:51:34.687928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.713 [2024-11-09 23:51:34.687953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.713 [2024-11-09 23:51:34.687973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.713 [2024-11-09 23:51:34.689617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:09.280 [2024-11-09 23:51:35.414855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:09.280 [2024-11-09 23:51:35.431094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:09.280 NULL1 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.280 23:51:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:09.539 [2024-11-09 23:51:35.506112] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
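For reference, the target setup that the fused_ordering.sh trace above drives through rpc_cmd reduces to the short sequence below. This is a minimal consolidated sketch, not part of the captured log: it assumes rpc_cmd resolves to the SPDK scripts/rpc.py client against the nvmf_tgt started earlier (the socket path is not shown explicitly in this trace), and it reproduces only the flags that appear in the trace.

  # create the TCP transport with the options used by this job (-t tcp -o -u 8192)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # create the subsystem: allow any host (-a), serial SPDK00000000000001, up to 10 namespaces (-m 10)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # listen on the target-side address/port set up for this run
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # back the namespace with a 1000 MiB, 512-byte-block null bdev (reported later as the 1 GB namespace)
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # run the initiator-side fused-ordering exerciser against that subsystem
  test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) lines that follow are the exerciser's per-command progress output once it has attached to nqn.2016-06.io.spdk:cnode1.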
00:18:09.539 [2024-11-09 23:51:35.506198] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3453501 ] 00:18:10.105 Attached to nqn.2016-06.io.spdk:cnode1 00:18:10.105 Namespace ID: 1 size: 1GB 00:18:10.105 fused_ordering(0) 00:18:10.105 fused_ordering(1) 00:18:10.105 fused_ordering(2) 00:18:10.105 fused_ordering(3) 00:18:10.105 fused_ordering(4) 00:18:10.105 fused_ordering(5) 00:18:10.105 fused_ordering(6) 00:18:10.105 fused_ordering(7) 00:18:10.105 fused_ordering(8) 00:18:10.105 fused_ordering(9) 00:18:10.105 fused_ordering(10) 00:18:10.105 fused_ordering(11) 00:18:10.105 fused_ordering(12) 00:18:10.105 fused_ordering(13) 00:18:10.105 fused_ordering(14) 00:18:10.105 fused_ordering(15) 00:18:10.105 fused_ordering(16) 00:18:10.105 fused_ordering(17) 00:18:10.105 fused_ordering(18) 00:18:10.105 fused_ordering(19) 00:18:10.105 fused_ordering(20) 00:18:10.105 fused_ordering(21) 00:18:10.105 fused_ordering(22) 00:18:10.105 fused_ordering(23) 00:18:10.105 fused_ordering(24) 00:18:10.105 fused_ordering(25) 00:18:10.105 fused_ordering(26) 00:18:10.105 fused_ordering(27) 00:18:10.105 fused_ordering(28) 00:18:10.105 fused_ordering(29) 00:18:10.105 fused_ordering(30) 00:18:10.105 fused_ordering(31) 00:18:10.105 fused_ordering(32) 00:18:10.105 fused_ordering(33) 00:18:10.105 fused_ordering(34) 00:18:10.105 fused_ordering(35) 00:18:10.105 fused_ordering(36) 00:18:10.105 fused_ordering(37) 00:18:10.105 fused_ordering(38) 00:18:10.105 fused_ordering(39) 00:18:10.105 fused_ordering(40) 00:18:10.105 fused_ordering(41) 00:18:10.105 fused_ordering(42) 00:18:10.105 fused_ordering(43) 00:18:10.105 fused_ordering(44) 00:18:10.105 fused_ordering(45) 00:18:10.105 fused_ordering(46) 00:18:10.105 fused_ordering(47) 00:18:10.105 fused_ordering(48) 00:18:10.105 fused_ordering(49) 00:18:10.105 fused_ordering(50) 00:18:10.105 fused_ordering(51) 00:18:10.105 fused_ordering(52) 00:18:10.105 fused_ordering(53) 00:18:10.105 fused_ordering(54) 00:18:10.105 fused_ordering(55) 00:18:10.105 fused_ordering(56) 00:18:10.105 fused_ordering(57) 00:18:10.105 fused_ordering(58) 00:18:10.105 fused_ordering(59) 00:18:10.105 fused_ordering(60) 00:18:10.105 fused_ordering(61) 00:18:10.105 fused_ordering(62) 00:18:10.105 fused_ordering(63) 00:18:10.105 fused_ordering(64) 00:18:10.105 fused_ordering(65) 00:18:10.105 fused_ordering(66) 00:18:10.105 fused_ordering(67) 00:18:10.105 fused_ordering(68) 00:18:10.105 fused_ordering(69) 00:18:10.105 fused_ordering(70) 00:18:10.105 fused_ordering(71) 00:18:10.105 fused_ordering(72) 00:18:10.105 fused_ordering(73) 00:18:10.105 fused_ordering(74) 00:18:10.105 fused_ordering(75) 00:18:10.105 fused_ordering(76) 00:18:10.105 fused_ordering(77) 00:18:10.105 fused_ordering(78) 00:18:10.105 fused_ordering(79) 00:18:10.105 fused_ordering(80) 00:18:10.105 fused_ordering(81) 00:18:10.105 fused_ordering(82) 00:18:10.105 fused_ordering(83) 00:18:10.105 fused_ordering(84) 00:18:10.105 fused_ordering(85) 00:18:10.105 fused_ordering(86) 00:18:10.105 fused_ordering(87) 00:18:10.105 fused_ordering(88) 00:18:10.105 fused_ordering(89) 00:18:10.105 fused_ordering(90) 00:18:10.105 fused_ordering(91) 00:18:10.105 fused_ordering(92) 00:18:10.105 fused_ordering(93) 00:18:10.105 fused_ordering(94) 00:18:10.105 fused_ordering(95) 00:18:10.105 fused_ordering(96) 00:18:10.105 fused_ordering(97) 00:18:10.105 fused_ordering(98) 
00:18:10.105 fused_ordering(99) 00:18:10.105 fused_ordering(100) 00:18:10.105 fused_ordering(101) 00:18:10.105 fused_ordering(102) 00:18:10.105 fused_ordering(103) 00:18:10.105 fused_ordering(104) 00:18:10.105 fused_ordering(105) 00:18:10.105 fused_ordering(106) 00:18:10.105 fused_ordering(107) 00:18:10.105 fused_ordering(108) 00:18:10.105 fused_ordering(109) 00:18:10.105 fused_ordering(110) 00:18:10.105 fused_ordering(111) 00:18:10.105 fused_ordering(112) 00:18:10.105 fused_ordering(113) 00:18:10.105 fused_ordering(114) 00:18:10.105 fused_ordering(115) 00:18:10.105 fused_ordering(116) 00:18:10.105 fused_ordering(117) 00:18:10.105 fused_ordering(118) 00:18:10.105 fused_ordering(119) 00:18:10.105 fused_ordering(120) 00:18:10.105 fused_ordering(121) 00:18:10.105 fused_ordering(122) 00:18:10.105 fused_ordering(123) 00:18:10.105 fused_ordering(124) 00:18:10.105 fused_ordering(125) 00:18:10.105 fused_ordering(126) 00:18:10.105 fused_ordering(127) 00:18:10.105 fused_ordering(128) 00:18:10.105 fused_ordering(129) 00:18:10.105 fused_ordering(130) 00:18:10.105 fused_ordering(131) 00:18:10.105 fused_ordering(132) 00:18:10.105 fused_ordering(133) 00:18:10.105 fused_ordering(134) 00:18:10.105 fused_ordering(135) 00:18:10.105 fused_ordering(136) 00:18:10.105 fused_ordering(137) 00:18:10.105 fused_ordering(138) 00:18:10.105 fused_ordering(139) 00:18:10.105 fused_ordering(140) 00:18:10.105 fused_ordering(141) 00:18:10.105 fused_ordering(142) 00:18:10.105 fused_ordering(143) 00:18:10.105 fused_ordering(144) 00:18:10.105 fused_ordering(145) 00:18:10.105 fused_ordering(146) 00:18:10.105 fused_ordering(147) 00:18:10.105 fused_ordering(148) 00:18:10.105 fused_ordering(149) 00:18:10.105 fused_ordering(150) 00:18:10.105 fused_ordering(151) 00:18:10.105 fused_ordering(152) 00:18:10.105 fused_ordering(153) 00:18:10.105 fused_ordering(154) 00:18:10.105 fused_ordering(155) 00:18:10.105 fused_ordering(156) 00:18:10.105 fused_ordering(157) 00:18:10.105 fused_ordering(158) 00:18:10.105 fused_ordering(159) 00:18:10.105 fused_ordering(160) 00:18:10.105 fused_ordering(161) 00:18:10.105 fused_ordering(162) 00:18:10.105 fused_ordering(163) 00:18:10.105 fused_ordering(164) 00:18:10.105 fused_ordering(165) 00:18:10.105 fused_ordering(166) 00:18:10.105 fused_ordering(167) 00:18:10.105 fused_ordering(168) 00:18:10.105 fused_ordering(169) 00:18:10.105 fused_ordering(170) 00:18:10.105 fused_ordering(171) 00:18:10.105 fused_ordering(172) 00:18:10.105 fused_ordering(173) 00:18:10.105 fused_ordering(174) 00:18:10.105 fused_ordering(175) 00:18:10.105 fused_ordering(176) 00:18:10.105 fused_ordering(177) 00:18:10.105 fused_ordering(178) 00:18:10.105 fused_ordering(179) 00:18:10.105 fused_ordering(180) 00:18:10.105 fused_ordering(181) 00:18:10.105 fused_ordering(182) 00:18:10.105 fused_ordering(183) 00:18:10.105 fused_ordering(184) 00:18:10.105 fused_ordering(185) 00:18:10.105 fused_ordering(186) 00:18:10.105 fused_ordering(187) 00:18:10.105 fused_ordering(188) 00:18:10.105 fused_ordering(189) 00:18:10.105 fused_ordering(190) 00:18:10.105 fused_ordering(191) 00:18:10.105 fused_ordering(192) 00:18:10.105 fused_ordering(193) 00:18:10.105 fused_ordering(194) 00:18:10.105 fused_ordering(195) 00:18:10.105 fused_ordering(196) 00:18:10.105 fused_ordering(197) 00:18:10.105 fused_ordering(198) 00:18:10.105 fused_ordering(199) 00:18:10.105 fused_ordering(200) 00:18:10.105 fused_ordering(201) 00:18:10.105 fused_ordering(202) 00:18:10.105 fused_ordering(203) 00:18:10.105 fused_ordering(204) 00:18:10.105 fused_ordering(205) 00:18:10.671 
fused_ordering(206) 00:18:10.671 fused_ordering(207) 00:18:10.671 fused_ordering(208) 00:18:10.671 fused_ordering(209) 00:18:10.671 fused_ordering(210) 00:18:10.671 fused_ordering(211) 00:18:10.671 fused_ordering(212) 00:18:10.671 fused_ordering(213) 00:18:10.671 fused_ordering(214) 00:18:10.671 fused_ordering(215) 00:18:10.671 fused_ordering(216) 00:18:10.671 fused_ordering(217) 00:18:10.671 fused_ordering(218) 00:18:10.671 fused_ordering(219) 00:18:10.671 fused_ordering(220) 00:18:10.671 fused_ordering(221) 00:18:10.671 fused_ordering(222) 00:18:10.671 fused_ordering(223) 00:18:10.671 fused_ordering(224) 00:18:10.671 fused_ordering(225) 00:18:10.671 fused_ordering(226) 00:18:10.671 fused_ordering(227) 00:18:10.671 fused_ordering(228) 00:18:10.671 fused_ordering(229) 00:18:10.671 fused_ordering(230) 00:18:10.671 fused_ordering(231) 00:18:10.671 fused_ordering(232) 00:18:10.671 fused_ordering(233) 00:18:10.671 fused_ordering(234) 00:18:10.671 fused_ordering(235) 00:18:10.671 fused_ordering(236) 00:18:10.671 fused_ordering(237) 00:18:10.671 fused_ordering(238) 00:18:10.671 fused_ordering(239) 00:18:10.671 fused_ordering(240) 00:18:10.671 fused_ordering(241) 00:18:10.671 fused_ordering(242) 00:18:10.671 fused_ordering(243) 00:18:10.671 fused_ordering(244) 00:18:10.671 fused_ordering(245) 00:18:10.671 fused_ordering(246) 00:18:10.671 fused_ordering(247) 00:18:10.671 fused_ordering(248) 00:18:10.671 fused_ordering(249) 00:18:10.671 fused_ordering(250) 00:18:10.671 fused_ordering(251) 00:18:10.671 fused_ordering(252) 00:18:10.671 fused_ordering(253) 00:18:10.671 fused_ordering(254) 00:18:10.671 fused_ordering(255) 00:18:10.671 fused_ordering(256) 00:18:10.671 fused_ordering(257) 00:18:10.671 fused_ordering(258) 00:18:10.671 fused_ordering(259) 00:18:10.671 fused_ordering(260) 00:18:10.671 fused_ordering(261) 00:18:10.671 fused_ordering(262) 00:18:10.671 fused_ordering(263) 00:18:10.671 fused_ordering(264) 00:18:10.671 fused_ordering(265) 00:18:10.671 fused_ordering(266) 00:18:10.671 fused_ordering(267) 00:18:10.671 fused_ordering(268) 00:18:10.671 fused_ordering(269) 00:18:10.671 fused_ordering(270) 00:18:10.671 fused_ordering(271) 00:18:10.671 fused_ordering(272) 00:18:10.671 fused_ordering(273) 00:18:10.671 fused_ordering(274) 00:18:10.671 fused_ordering(275) 00:18:10.671 fused_ordering(276) 00:18:10.671 fused_ordering(277) 00:18:10.671 fused_ordering(278) 00:18:10.671 fused_ordering(279) 00:18:10.671 fused_ordering(280) 00:18:10.671 fused_ordering(281) 00:18:10.671 fused_ordering(282) 00:18:10.671 fused_ordering(283) 00:18:10.671 fused_ordering(284) 00:18:10.671 fused_ordering(285) 00:18:10.671 fused_ordering(286) 00:18:10.671 fused_ordering(287) 00:18:10.671 fused_ordering(288) 00:18:10.671 fused_ordering(289) 00:18:10.671 fused_ordering(290) 00:18:10.671 fused_ordering(291) 00:18:10.671 fused_ordering(292) 00:18:10.671 fused_ordering(293) 00:18:10.671 fused_ordering(294) 00:18:10.671 fused_ordering(295) 00:18:10.671 fused_ordering(296) 00:18:10.671 fused_ordering(297) 00:18:10.671 fused_ordering(298) 00:18:10.671 fused_ordering(299) 00:18:10.671 fused_ordering(300) 00:18:10.671 fused_ordering(301) 00:18:10.671 fused_ordering(302) 00:18:10.671 fused_ordering(303) 00:18:10.671 fused_ordering(304) 00:18:10.671 fused_ordering(305) 00:18:10.671 fused_ordering(306) 00:18:10.671 fused_ordering(307) 00:18:10.671 fused_ordering(308) 00:18:10.671 fused_ordering(309) 00:18:10.671 fused_ordering(310) 00:18:10.671 fused_ordering(311) 00:18:10.671 fused_ordering(312) 00:18:10.671 fused_ordering(313) 
00:18:10.671 fused_ordering(314) 00:18:10.671 fused_ordering(315) 00:18:10.671 fused_ordering(316) 00:18:10.671 fused_ordering(317) 00:18:10.671 fused_ordering(318) 00:18:10.671 fused_ordering(319) 00:18:10.671 fused_ordering(320) 00:18:10.671 fused_ordering(321) 00:18:10.671 fused_ordering(322) 00:18:10.671 fused_ordering(323) 00:18:10.672 fused_ordering(324) 00:18:10.672 fused_ordering(325) 00:18:10.672 fused_ordering(326) 00:18:10.672 fused_ordering(327) 00:18:10.672 fused_ordering(328) 00:18:10.672 fused_ordering(329) 00:18:10.672 fused_ordering(330) 00:18:10.672 fused_ordering(331) 00:18:10.672 fused_ordering(332) 00:18:10.672 fused_ordering(333) 00:18:10.672 fused_ordering(334) 00:18:10.672 fused_ordering(335) 00:18:10.672 fused_ordering(336) 00:18:10.672 fused_ordering(337) 00:18:10.672 fused_ordering(338) 00:18:10.672 fused_ordering(339) 00:18:10.672 fused_ordering(340) 00:18:10.672 fused_ordering(341) 00:18:10.672 fused_ordering(342) 00:18:10.672 fused_ordering(343) 00:18:10.672 fused_ordering(344) 00:18:10.672 fused_ordering(345) 00:18:10.672 fused_ordering(346) 00:18:10.672 fused_ordering(347) 00:18:10.672 fused_ordering(348) 00:18:10.672 fused_ordering(349) 00:18:10.672 fused_ordering(350) 00:18:10.672 fused_ordering(351) 00:18:10.672 fused_ordering(352) 00:18:10.672 fused_ordering(353) 00:18:10.672 fused_ordering(354) 00:18:10.672 fused_ordering(355) 00:18:10.672 fused_ordering(356) 00:18:10.672 fused_ordering(357) 00:18:10.672 fused_ordering(358) 00:18:10.672 fused_ordering(359) 00:18:10.672 fused_ordering(360) 00:18:10.672 fused_ordering(361) 00:18:10.672 fused_ordering(362) 00:18:10.672 fused_ordering(363) 00:18:10.672 fused_ordering(364) 00:18:10.672 fused_ordering(365) 00:18:10.672 fused_ordering(366) 00:18:10.672 fused_ordering(367) 00:18:10.672 fused_ordering(368) 00:18:10.672 fused_ordering(369) 00:18:10.672 fused_ordering(370) 00:18:10.672 fused_ordering(371) 00:18:10.672 fused_ordering(372) 00:18:10.672 fused_ordering(373) 00:18:10.672 fused_ordering(374) 00:18:10.672 fused_ordering(375) 00:18:10.672 fused_ordering(376) 00:18:10.672 fused_ordering(377) 00:18:10.672 fused_ordering(378) 00:18:10.672 fused_ordering(379) 00:18:10.672 fused_ordering(380) 00:18:10.672 fused_ordering(381) 00:18:10.672 fused_ordering(382) 00:18:10.672 fused_ordering(383) 00:18:10.672 fused_ordering(384) 00:18:10.672 fused_ordering(385) 00:18:10.672 fused_ordering(386) 00:18:10.672 fused_ordering(387) 00:18:10.672 fused_ordering(388) 00:18:10.672 fused_ordering(389) 00:18:10.672 fused_ordering(390) 00:18:10.672 fused_ordering(391) 00:18:10.672 fused_ordering(392) 00:18:10.672 fused_ordering(393) 00:18:10.672 fused_ordering(394) 00:18:10.672 fused_ordering(395) 00:18:10.672 fused_ordering(396) 00:18:10.672 fused_ordering(397) 00:18:10.672 fused_ordering(398) 00:18:10.672 fused_ordering(399) 00:18:10.672 fused_ordering(400) 00:18:10.672 fused_ordering(401) 00:18:10.672 fused_ordering(402) 00:18:10.672 fused_ordering(403) 00:18:10.672 fused_ordering(404) 00:18:10.672 fused_ordering(405) 00:18:10.672 fused_ordering(406) 00:18:10.672 fused_ordering(407) 00:18:10.672 fused_ordering(408) 00:18:10.672 fused_ordering(409) 00:18:10.672 fused_ordering(410) 00:18:11.238 fused_ordering(411) 00:18:11.238 fused_ordering(412) 00:18:11.238 fused_ordering(413) 00:18:11.238 fused_ordering(414) 00:18:11.238 fused_ordering(415) 00:18:11.238 fused_ordering(416) 00:18:11.238 fused_ordering(417) 00:18:11.238 fused_ordering(418) 00:18:11.238 fused_ordering(419) 00:18:11.238 fused_ordering(420) 00:18:11.238 
fused_ordering(421) 00:18:11.238 fused_ordering(422) 00:18:11.238 fused_ordering(423) 00:18:11.238 fused_ordering(424) 00:18:11.238 fused_ordering(425) 00:18:11.238 fused_ordering(426) 00:18:11.238 fused_ordering(427) 00:18:11.238 fused_ordering(428) 00:18:11.238 fused_ordering(429) 00:18:11.238 fused_ordering(430) 00:18:11.238 fused_ordering(431) 00:18:11.238 fused_ordering(432) 00:18:11.238 fused_ordering(433) 00:18:11.239 fused_ordering(434) 00:18:11.239 fused_ordering(435) 00:18:11.239 fused_ordering(436) 00:18:11.239 fused_ordering(437) 00:18:11.239 fused_ordering(438) 00:18:11.239 fused_ordering(439) 00:18:11.239 fused_ordering(440) 00:18:11.239 fused_ordering(441) 00:18:11.239 fused_ordering(442) 00:18:11.239 fused_ordering(443) 00:18:11.239 fused_ordering(444) 00:18:11.239 fused_ordering(445) 00:18:11.239 fused_ordering(446) 00:18:11.239 fused_ordering(447) 00:18:11.239 fused_ordering(448) 00:18:11.239 fused_ordering(449) 00:18:11.239 fused_ordering(450) 00:18:11.239 fused_ordering(451) 00:18:11.239 fused_ordering(452) 00:18:11.239 fused_ordering(453) 00:18:11.239 fused_ordering(454) 00:18:11.239 fused_ordering(455) 00:18:11.239 fused_ordering(456) 00:18:11.239 fused_ordering(457) 00:18:11.239 fused_ordering(458) 00:18:11.239 fused_ordering(459) 00:18:11.239 fused_ordering(460) 00:18:11.239 fused_ordering(461) 00:18:11.239 fused_ordering(462) 00:18:11.239 fused_ordering(463) 00:18:11.239 fused_ordering(464) 00:18:11.239 fused_ordering(465) 00:18:11.239 fused_ordering(466) 00:18:11.239 fused_ordering(467) 00:18:11.239 fused_ordering(468) 00:18:11.239 fused_ordering(469) 00:18:11.239 fused_ordering(470) 00:18:11.239 fused_ordering(471) 00:18:11.239 fused_ordering(472) 00:18:11.239 fused_ordering(473) 00:18:11.239 fused_ordering(474) 00:18:11.239 fused_ordering(475) 00:18:11.239 fused_ordering(476) 00:18:11.239 fused_ordering(477) 00:18:11.239 fused_ordering(478) 00:18:11.239 fused_ordering(479) 00:18:11.239 fused_ordering(480) 00:18:11.239 fused_ordering(481) 00:18:11.239 fused_ordering(482) 00:18:11.239 fused_ordering(483) 00:18:11.239 fused_ordering(484) 00:18:11.239 fused_ordering(485) 00:18:11.239 fused_ordering(486) 00:18:11.239 fused_ordering(487) 00:18:11.239 fused_ordering(488) 00:18:11.239 fused_ordering(489) 00:18:11.239 fused_ordering(490) 00:18:11.239 fused_ordering(491) 00:18:11.239 fused_ordering(492) 00:18:11.239 fused_ordering(493) 00:18:11.239 fused_ordering(494) 00:18:11.239 fused_ordering(495) 00:18:11.239 fused_ordering(496) 00:18:11.239 fused_ordering(497) 00:18:11.239 fused_ordering(498) 00:18:11.239 fused_ordering(499) 00:18:11.239 fused_ordering(500) 00:18:11.239 fused_ordering(501) 00:18:11.239 fused_ordering(502) 00:18:11.239 fused_ordering(503) 00:18:11.239 fused_ordering(504) 00:18:11.239 fused_ordering(505) 00:18:11.239 fused_ordering(506) 00:18:11.239 fused_ordering(507) 00:18:11.239 fused_ordering(508) 00:18:11.239 fused_ordering(509) 00:18:11.239 fused_ordering(510) 00:18:11.239 fused_ordering(511) 00:18:11.239 fused_ordering(512) 00:18:11.239 fused_ordering(513) 00:18:11.239 fused_ordering(514) 00:18:11.239 fused_ordering(515) 00:18:11.239 fused_ordering(516) 00:18:11.239 fused_ordering(517) 00:18:11.239 fused_ordering(518) 00:18:11.239 fused_ordering(519) 00:18:11.239 fused_ordering(520) 00:18:11.239 fused_ordering(521) 00:18:11.239 fused_ordering(522) 00:18:11.239 fused_ordering(523) 00:18:11.239 fused_ordering(524) 00:18:11.239 fused_ordering(525) 00:18:11.239 fused_ordering(526) 00:18:11.239 fused_ordering(527) 00:18:11.239 fused_ordering(528) 
00:18:11.239 fused_ordering(529) 00:18:11.239 fused_ordering(530) 00:18:11.239 fused_ordering(531) 00:18:11.239 fused_ordering(532) 00:18:11.239 fused_ordering(533) 00:18:11.239 fused_ordering(534) 00:18:11.239 fused_ordering(535) 00:18:11.239 fused_ordering(536) 00:18:11.239 fused_ordering(537) 00:18:11.239 fused_ordering(538) 00:18:11.239 fused_ordering(539) 00:18:11.239 fused_ordering(540) 00:18:11.239 fused_ordering(541) 00:18:11.239 fused_ordering(542) 00:18:11.239 fused_ordering(543) 00:18:11.239 fused_ordering(544) 00:18:11.239 fused_ordering(545) 00:18:11.239 fused_ordering(546) 00:18:11.239 fused_ordering(547) 00:18:11.239 fused_ordering(548) 00:18:11.239 fused_ordering(549) 00:18:11.239 fused_ordering(550) 00:18:11.239 fused_ordering(551) 00:18:11.239 fused_ordering(552) 00:18:11.239 fused_ordering(553) 00:18:11.239 fused_ordering(554) 00:18:11.239 fused_ordering(555) 00:18:11.239 fused_ordering(556) 00:18:11.239 fused_ordering(557) 00:18:11.239 fused_ordering(558) 00:18:11.239 fused_ordering(559) 00:18:11.239 fused_ordering(560) 00:18:11.239 fused_ordering(561) 00:18:11.239 fused_ordering(562) 00:18:11.239 fused_ordering(563) 00:18:11.239 fused_ordering(564) 00:18:11.239 fused_ordering(565) 00:18:11.239 fused_ordering(566) 00:18:11.239 fused_ordering(567) 00:18:11.239 fused_ordering(568) 00:18:11.239 fused_ordering(569) 00:18:11.239 fused_ordering(570) 00:18:11.239 fused_ordering(571) 00:18:11.239 fused_ordering(572) 00:18:11.239 fused_ordering(573) 00:18:11.239 fused_ordering(574) 00:18:11.239 fused_ordering(575) 00:18:11.239 fused_ordering(576) 00:18:11.239 fused_ordering(577) 00:18:11.239 fused_ordering(578) 00:18:11.239 fused_ordering(579) 00:18:11.239 fused_ordering(580) 00:18:11.239 fused_ordering(581) 00:18:11.239 fused_ordering(582) 00:18:11.239 fused_ordering(583) 00:18:11.239 fused_ordering(584) 00:18:11.239 fused_ordering(585) 00:18:11.239 fused_ordering(586) 00:18:11.239 fused_ordering(587) 00:18:11.239 fused_ordering(588) 00:18:11.239 fused_ordering(589) 00:18:11.239 fused_ordering(590) 00:18:11.239 fused_ordering(591) 00:18:11.239 fused_ordering(592) 00:18:11.239 fused_ordering(593) 00:18:11.239 fused_ordering(594) 00:18:11.239 fused_ordering(595) 00:18:11.239 fused_ordering(596) 00:18:11.239 fused_ordering(597) 00:18:11.239 fused_ordering(598) 00:18:11.239 fused_ordering(599) 00:18:11.239 fused_ordering(600) 00:18:11.239 fused_ordering(601) 00:18:11.239 fused_ordering(602) 00:18:11.239 fused_ordering(603) 00:18:11.239 fused_ordering(604) 00:18:11.239 fused_ordering(605) 00:18:11.239 fused_ordering(606) 00:18:11.239 fused_ordering(607) 00:18:11.239 fused_ordering(608) 00:18:11.239 fused_ordering(609) 00:18:11.239 fused_ordering(610) 00:18:11.239 fused_ordering(611) 00:18:11.239 fused_ordering(612) 00:18:11.239 fused_ordering(613) 00:18:11.239 fused_ordering(614) 00:18:11.239 fused_ordering(615) 00:18:11.806 fused_ordering(616) 00:18:11.806 fused_ordering(617) 00:18:11.806 fused_ordering(618) 00:18:11.806 fused_ordering(619) 00:18:11.806 fused_ordering(620) 00:18:11.806 fused_ordering(621) 00:18:11.806 fused_ordering(622) 00:18:11.806 fused_ordering(623) 00:18:11.806 fused_ordering(624) 00:18:11.806 fused_ordering(625) 00:18:11.806 fused_ordering(626) 00:18:11.806 fused_ordering(627) 00:18:11.806 fused_ordering(628) 00:18:11.806 fused_ordering(629) 00:18:11.806 fused_ordering(630) 00:18:11.806 fused_ordering(631) 00:18:11.806 fused_ordering(632) 00:18:11.806 fused_ordering(633) 00:18:11.806 fused_ordering(634) 00:18:11.806 fused_ordering(635) 00:18:11.806 
fused_ordering(636) 00:18:11.806 fused_ordering(637) 00:18:11.806 fused_ordering(638) 00:18:11.806 fused_ordering(639) 00:18:11.806 fused_ordering(640) 00:18:11.806 fused_ordering(641) 00:18:11.806 fused_ordering(642) 00:18:11.806 fused_ordering(643) 00:18:11.806 fused_ordering(644) 00:18:11.806 fused_ordering(645) 00:18:11.806 fused_ordering(646) 00:18:11.806 fused_ordering(647) 00:18:11.806 fused_ordering(648) 00:18:11.806 fused_ordering(649) 00:18:11.806 fused_ordering(650) 00:18:11.806 fused_ordering(651) 00:18:11.806 fused_ordering(652) 00:18:11.806 fused_ordering(653) 00:18:11.806 fused_ordering(654) 00:18:11.806 fused_ordering(655) 00:18:11.806 fused_ordering(656) 00:18:11.806 fused_ordering(657) 00:18:11.806 fused_ordering(658) 00:18:11.806 fused_ordering(659) 00:18:11.806 fused_ordering(660) 00:18:11.806 fused_ordering(661) 00:18:11.806 fused_ordering(662) 00:18:11.806 fused_ordering(663) 00:18:11.806 fused_ordering(664) 00:18:11.806 fused_ordering(665) 00:18:11.806 fused_ordering(666) 00:18:11.806 fused_ordering(667) 00:18:11.806 fused_ordering(668) 00:18:11.806 fused_ordering(669) 00:18:11.806 fused_ordering(670) 00:18:11.806 fused_ordering(671) 00:18:11.806 fused_ordering(672) 00:18:11.806 fused_ordering(673) 00:18:11.806 fused_ordering(674) 00:18:11.806 fused_ordering(675) 00:18:11.806 fused_ordering(676) 00:18:11.806 fused_ordering(677) 00:18:11.806 fused_ordering(678) 00:18:11.806 fused_ordering(679) 00:18:11.806 fused_ordering(680) 00:18:11.806 fused_ordering(681) 00:18:11.806 fused_ordering(682) 00:18:11.806 fused_ordering(683) 00:18:11.806 fused_ordering(684) 00:18:11.806 fused_ordering(685) 00:18:11.806 fused_ordering(686) 00:18:11.806 fused_ordering(687) 00:18:11.806 fused_ordering(688) 00:18:11.806 fused_ordering(689) 00:18:11.806 fused_ordering(690) 00:18:11.806 fused_ordering(691) 00:18:11.806 fused_ordering(692) 00:18:11.806 fused_ordering(693) 00:18:11.806 fused_ordering(694) 00:18:11.806 fused_ordering(695) 00:18:11.806 fused_ordering(696) 00:18:11.806 fused_ordering(697) 00:18:11.806 fused_ordering(698) 00:18:11.806 fused_ordering(699) 00:18:11.806 fused_ordering(700) 00:18:11.806 fused_ordering(701) 00:18:11.806 fused_ordering(702) 00:18:11.806 fused_ordering(703) 00:18:11.806 fused_ordering(704) 00:18:11.806 fused_ordering(705) 00:18:11.806 fused_ordering(706) 00:18:11.806 fused_ordering(707) 00:18:11.806 fused_ordering(708) 00:18:11.806 fused_ordering(709) 00:18:11.806 fused_ordering(710) 00:18:11.806 fused_ordering(711) 00:18:11.806 fused_ordering(712) 00:18:11.806 fused_ordering(713) 00:18:11.806 fused_ordering(714) 00:18:11.806 fused_ordering(715) 00:18:11.806 fused_ordering(716) 00:18:11.806 fused_ordering(717) 00:18:11.806 fused_ordering(718) 00:18:11.806 fused_ordering(719) 00:18:11.806 fused_ordering(720) 00:18:11.806 fused_ordering(721) 00:18:11.806 fused_ordering(722) 00:18:11.806 fused_ordering(723) 00:18:11.806 fused_ordering(724) 00:18:11.806 fused_ordering(725) 00:18:11.806 fused_ordering(726) 00:18:11.806 fused_ordering(727) 00:18:11.806 fused_ordering(728) 00:18:11.806 fused_ordering(729) 00:18:11.806 fused_ordering(730) 00:18:11.806 fused_ordering(731) 00:18:11.806 fused_ordering(732) 00:18:11.806 fused_ordering(733) 00:18:11.806 fused_ordering(734) 00:18:11.806 fused_ordering(735) 00:18:11.806 fused_ordering(736) 00:18:11.806 fused_ordering(737) 00:18:11.806 fused_ordering(738) 00:18:11.806 fused_ordering(739) 00:18:11.806 fused_ordering(740) 00:18:11.806 fused_ordering(741) 00:18:11.806 fused_ordering(742) 00:18:11.806 fused_ordering(743) 
00:18:11.806 fused_ordering(744) 00:18:11.806 fused_ordering(745) 00:18:11.806 fused_ordering(746) 00:18:11.806 fused_ordering(747) 00:18:11.806 fused_ordering(748) 00:18:11.806 fused_ordering(749) 00:18:11.806 fused_ordering(750) 00:18:11.806 fused_ordering(751) 00:18:11.806 fused_ordering(752) 00:18:11.806 fused_ordering(753) 00:18:11.806 fused_ordering(754) 00:18:11.806 fused_ordering(755) 00:18:11.806 fused_ordering(756) 00:18:11.806 fused_ordering(757) 00:18:11.806 fused_ordering(758) 00:18:11.806 fused_ordering(759) 00:18:11.806 fused_ordering(760) 00:18:11.806 fused_ordering(761) 00:18:11.806 fused_ordering(762) 00:18:11.806 fused_ordering(763) 00:18:11.806 fused_ordering(764) 00:18:11.806 fused_ordering(765) 00:18:11.806 fused_ordering(766) 00:18:11.806 fused_ordering(767) 00:18:11.806 fused_ordering(768) 00:18:11.806 fused_ordering(769) 00:18:11.806 fused_ordering(770) 00:18:11.806 fused_ordering(771) 00:18:11.806 fused_ordering(772) 00:18:11.806 fused_ordering(773) 00:18:11.806 fused_ordering(774) 00:18:11.806 fused_ordering(775) 00:18:11.806 fused_ordering(776) 00:18:11.806 fused_ordering(777) 00:18:11.806 fused_ordering(778) 00:18:11.806 fused_ordering(779) 00:18:11.806 fused_ordering(780) 00:18:11.806 fused_ordering(781) 00:18:11.806 fused_ordering(782) 00:18:11.806 fused_ordering(783) 00:18:11.806 fused_ordering(784) 00:18:11.806 fused_ordering(785) 00:18:11.806 fused_ordering(786) 00:18:11.806 fused_ordering(787) 00:18:11.807 fused_ordering(788) 00:18:11.807 fused_ordering(789) 00:18:11.807 fused_ordering(790) 00:18:11.807 fused_ordering(791) 00:18:11.807 fused_ordering(792) 00:18:11.807 fused_ordering(793) 00:18:11.807 fused_ordering(794) 00:18:11.807 fused_ordering(795) 00:18:11.807 fused_ordering(796) 00:18:11.807 fused_ordering(797) 00:18:11.807 fused_ordering(798) 00:18:11.807 fused_ordering(799) 00:18:11.807 fused_ordering(800) 00:18:11.807 fused_ordering(801) 00:18:11.807 fused_ordering(802) 00:18:11.807 fused_ordering(803) 00:18:11.807 fused_ordering(804) 00:18:11.807 fused_ordering(805) 00:18:11.807 fused_ordering(806) 00:18:11.807 fused_ordering(807) 00:18:11.807 fused_ordering(808) 00:18:11.807 fused_ordering(809) 00:18:11.807 fused_ordering(810) 00:18:11.807 fused_ordering(811) 00:18:11.807 fused_ordering(812) 00:18:11.807 fused_ordering(813) 00:18:11.807 fused_ordering(814) 00:18:11.807 fused_ordering(815) 00:18:11.807 fused_ordering(816) 00:18:11.807 fused_ordering(817) 00:18:11.807 fused_ordering(818) 00:18:11.807 fused_ordering(819) 00:18:11.807 fused_ordering(820) 00:18:12.742 fused_ordering(821) 00:18:12.742 fused_ordering(822) 00:18:12.742 fused_ordering(823) 00:18:12.742 fused_ordering(824) 00:18:12.742 fused_ordering(825) 00:18:12.742 fused_ordering(826) 00:18:12.742 fused_ordering(827) 00:18:12.742 fused_ordering(828) 00:18:12.742 fused_ordering(829) 00:18:12.742 fused_ordering(830) 00:18:12.742 fused_ordering(831) 00:18:12.742 fused_ordering(832) 00:18:12.742 fused_ordering(833) 00:18:12.742 fused_ordering(834) 00:18:12.742 fused_ordering(835) 00:18:12.742 fused_ordering(836) 00:18:12.742 fused_ordering(837) 00:18:12.742 fused_ordering(838) 00:18:12.742 fused_ordering(839) 00:18:12.742 fused_ordering(840) 00:18:12.742 fused_ordering(841) 00:18:12.742 fused_ordering(842) 00:18:12.742 fused_ordering(843) 00:18:12.742 fused_ordering(844) 00:18:12.742 fused_ordering(845) 00:18:12.742 fused_ordering(846) 00:18:12.742 fused_ordering(847) 00:18:12.742 fused_ordering(848) 00:18:12.742 fused_ordering(849) 00:18:12.742 fused_ordering(850) 00:18:12.742 
fused_ordering(851) 00:18:12.742 fused_ordering(852) 00:18:12.742 fused_ordering(853) 00:18:12.742 fused_ordering(854) 00:18:12.742 fused_ordering(855) 00:18:12.742 fused_ordering(856) 00:18:12.742 fused_ordering(857) 00:18:12.742 fused_ordering(858) 00:18:12.742 fused_ordering(859) 00:18:12.742 fused_ordering(860) 00:18:12.742 fused_ordering(861) 00:18:12.742 fused_ordering(862) 00:18:12.742 fused_ordering(863) 00:18:12.742 fused_ordering(864) 00:18:12.742 fused_ordering(865) 00:18:12.742 fused_ordering(866) 00:18:12.742 fused_ordering(867) 00:18:12.742 fused_ordering(868) 00:18:12.742 fused_ordering(869) 00:18:12.742 fused_ordering(870) 00:18:12.742 fused_ordering(871) 00:18:12.742 fused_ordering(872) 00:18:12.742 fused_ordering(873) 00:18:12.742 fused_ordering(874) 00:18:12.742 fused_ordering(875) 00:18:12.742 fused_ordering(876) 00:18:12.742 fused_ordering(877) 00:18:12.742 fused_ordering(878) 00:18:12.742 fused_ordering(879) 00:18:12.742 fused_ordering(880) 00:18:12.742 fused_ordering(881) 00:18:12.742 fused_ordering(882) 00:18:12.742 fused_ordering(883) 00:18:12.742 fused_ordering(884) 00:18:12.742 fused_ordering(885) 00:18:12.742 fused_ordering(886) 00:18:12.742 fused_ordering(887) 00:18:12.742 fused_ordering(888) 00:18:12.742 fused_ordering(889) 00:18:12.742 fused_ordering(890) 00:18:12.742 fused_ordering(891) 00:18:12.742 fused_ordering(892) 00:18:12.742 fused_ordering(893) 00:18:12.742 fused_ordering(894) 00:18:12.742 fused_ordering(895) 00:18:12.742 fused_ordering(896) 00:18:12.742 fused_ordering(897) 00:18:12.742 fused_ordering(898) 00:18:12.742 fused_ordering(899) 00:18:12.742 fused_ordering(900) 00:18:12.742 fused_ordering(901) 00:18:12.742 fused_ordering(902) 00:18:12.742 fused_ordering(903) 00:18:12.742 fused_ordering(904) 00:18:12.742 fused_ordering(905) 00:18:12.742 fused_ordering(906) 00:18:12.742 fused_ordering(907) 00:18:12.742 fused_ordering(908) 00:18:12.742 fused_ordering(909) 00:18:12.742 fused_ordering(910) 00:18:12.742 fused_ordering(911) 00:18:12.742 fused_ordering(912) 00:18:12.742 fused_ordering(913) 00:18:12.742 fused_ordering(914) 00:18:12.742 fused_ordering(915) 00:18:12.742 fused_ordering(916) 00:18:12.742 fused_ordering(917) 00:18:12.742 fused_ordering(918) 00:18:12.742 fused_ordering(919) 00:18:12.742 fused_ordering(920) 00:18:12.742 fused_ordering(921) 00:18:12.742 fused_ordering(922) 00:18:12.742 fused_ordering(923) 00:18:12.742 fused_ordering(924) 00:18:12.742 fused_ordering(925) 00:18:12.742 fused_ordering(926) 00:18:12.742 fused_ordering(927) 00:18:12.742 fused_ordering(928) 00:18:12.742 fused_ordering(929) 00:18:12.742 fused_ordering(930) 00:18:12.742 fused_ordering(931) 00:18:12.742 fused_ordering(932) 00:18:12.742 fused_ordering(933) 00:18:12.742 fused_ordering(934) 00:18:12.742 fused_ordering(935) 00:18:12.742 fused_ordering(936) 00:18:12.742 fused_ordering(937) 00:18:12.742 fused_ordering(938) 00:18:12.742 fused_ordering(939) 00:18:12.742 fused_ordering(940) 00:18:12.742 fused_ordering(941) 00:18:12.742 fused_ordering(942) 00:18:12.742 fused_ordering(943) 00:18:12.742 fused_ordering(944) 00:18:12.742 fused_ordering(945) 00:18:12.742 fused_ordering(946) 00:18:12.742 fused_ordering(947) 00:18:12.742 fused_ordering(948) 00:18:12.742 fused_ordering(949) 00:18:12.742 fused_ordering(950) 00:18:12.742 fused_ordering(951) 00:18:12.742 fused_ordering(952) 00:18:12.742 fused_ordering(953) 00:18:12.742 fused_ordering(954) 00:18:12.742 fused_ordering(955) 00:18:12.742 fused_ordering(956) 00:18:12.742 fused_ordering(957) 00:18:12.742 fused_ordering(958) 
00:18:12.742 fused_ordering(959) 00:18:12.742 fused_ordering(960) 00:18:12.742 fused_ordering(961) 00:18:12.742 fused_ordering(962) 00:18:12.742 fused_ordering(963) 00:18:12.742 fused_ordering(964) 00:18:12.742 fused_ordering(965) 00:18:12.742 fused_ordering(966) 00:18:12.742 fused_ordering(967) 00:18:12.742 fused_ordering(968) 00:18:12.742 fused_ordering(969) 00:18:12.742 fused_ordering(970) 00:18:12.742 fused_ordering(971) 00:18:12.742 fused_ordering(972) 00:18:12.742 fused_ordering(973) 00:18:12.742 fused_ordering(974) 00:18:12.742 fused_ordering(975) 00:18:12.742 fused_ordering(976) 00:18:12.742 fused_ordering(977) 00:18:12.742 fused_ordering(978) 00:18:12.742 fused_ordering(979) 00:18:12.742 fused_ordering(980) 00:18:12.742 fused_ordering(981) 00:18:12.742 fused_ordering(982) 00:18:12.742 fused_ordering(983) 00:18:12.742 fused_ordering(984) 00:18:12.742 fused_ordering(985) 00:18:12.742 fused_ordering(986) 00:18:12.742 fused_ordering(987) 00:18:12.742 fused_ordering(988) 00:18:12.742 fused_ordering(989) 00:18:12.742 fused_ordering(990) 00:18:12.742 fused_ordering(991) 00:18:12.742 fused_ordering(992) 00:18:12.742 fused_ordering(993) 00:18:12.742 fused_ordering(994) 00:18:12.742 fused_ordering(995) 00:18:12.742 fused_ordering(996) 00:18:12.742 fused_ordering(997) 00:18:12.742 fused_ordering(998) 00:18:12.742 fused_ordering(999) 00:18:12.742 fused_ordering(1000) 00:18:12.742 fused_ordering(1001) 00:18:12.742 fused_ordering(1002) 00:18:12.742 fused_ordering(1003) 00:18:12.742 fused_ordering(1004) 00:18:12.742 fused_ordering(1005) 00:18:12.742 fused_ordering(1006) 00:18:12.742 fused_ordering(1007) 00:18:12.742 fused_ordering(1008) 00:18:12.742 fused_ordering(1009) 00:18:12.742 fused_ordering(1010) 00:18:12.742 fused_ordering(1011) 00:18:12.742 fused_ordering(1012) 00:18:12.742 fused_ordering(1013) 00:18:12.742 fused_ordering(1014) 00:18:12.742 fused_ordering(1015) 00:18:12.742 fused_ordering(1016) 00:18:12.742 fused_ordering(1017) 00:18:12.742 fused_ordering(1018) 00:18:12.742 fused_ordering(1019) 00:18:12.742 fused_ordering(1020) 00:18:12.742 fused_ordering(1021) 00:18:12.742 fused_ordering(1022) 00:18:12.742 fused_ordering(1023) 00:18:12.742 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:12.742 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:12.742 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:12.742 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:12.742 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:12.742 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:12.742 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:12.742 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:12.742 rmmod nvme_tcp 00:18:12.742 rmmod nvme_fabrics 00:18:12.742 rmmod nvme_keyring 00:18:12.742 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:12.742 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:12.742 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:12.742 23:51:38 
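After the 1024th fused-ordering iteration completes, the script clears its traps and nvmftestfini begins tearing the host stack down; the module unload is retried because the kernel may still hold references briefly after the last disconnect. A condensed sketch of that unload step, based on the trace above (root assumed; the back-off sleep is an assumption, the harness simply retries):

    sync
    set +e                                   # unloading may fail while references drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break     # also drops nvme_tcp/nvme_fabrics/nvme_keyring
        sleep 0.2                            # hypothetical back-off, not from this log
    done
    modprobe -v -r nvme-fabrics
    set -e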
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3453301 ']' 00:18:12.743 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3453301 00:18:12.743 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3453301 ']' 00:18:12.743 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3453301 00:18:12.743 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:18:12.743 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:12.743 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3453301 00:18:12.743 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:12.743 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:12.743 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3453301' 00:18:12.743 killing process with pid 3453301 00:18:12.743 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3453301 00:18:12.743 23:51:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3453301 00:18:14.118 23:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:14.118 23:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:14.118 23:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:14.118 23:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:14.118 23:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:14.118 23:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:14.118 23:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:14.118 23:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:14.118 23:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:14.118 23:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.118 23:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.118 23:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.025 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:16.025 00:18:16.025 real 0m10.080s 00:18:16.025 user 0m8.398s 00:18:16.025 sys 0m3.637s 00:18:16.025 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:16.025 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:16.025 ************************************ 00:18:16.025 END TEST nvmf_fused_ordering 00:18:16.025 
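killprocess checks that pid 3453301 still belongs to the expected SPDK reactor before signalling it, and iptr strips only the SPDK-tagged firewall rules. A rough equivalent of those two cleanup helpers; the SPDK_NVMF comment tag and the ps/kill/wait sequence are taken from the trace, the sudo branch is a simplification:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                      # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" = sudo ]; then
            sudo kill "$pid"                            # sketch only; upstream handles this case separately
        else
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid"
        fi
    }

    # Remove only the firewall rules the test suite added (tagged SPDK_NVMF).
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    killprocess 3453301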
************************************ 00:18:16.025 23:51:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:16.025 23:51:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:16.025 23:51:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:16.025 23:51:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:16.025 ************************************ 00:18:16.025 START TEST nvmf_ns_masking 00:18:16.025 ************************************ 00:18:16.025 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:16.025 * Looking for test storage... 00:18:16.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:16.025 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:16.025 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:18:16.025 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
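The lt 1.15 2 guard above decides whether the installed lcov is old enough to need the branch/function coverage flags spelled out explicitly. The comparison splits each version string on '.', '-' and ':' and compares field by field; a compact sketch mirroring that logic (not the verbatim scripts/common.sh implementation):

    lt() {                       # usage: lt 1.15 2  -> success when $1 < $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                 # equal versions are not "less than"
    }
    lt 1.15 2 && echo "old lcov, passing explicit --rc coverage flags"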
ver1_l : ver2_l) )) 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.284 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:16.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.285 --rc genhtml_branch_coverage=1 00:18:16.285 --rc genhtml_function_coverage=1 00:18:16.285 --rc genhtml_legend=1 00:18:16.285 --rc geninfo_all_blocks=1 00:18:16.285 --rc geninfo_unexecuted_blocks=1 00:18:16.285 00:18:16.285 ' 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:16.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.285 --rc genhtml_branch_coverage=1 00:18:16.285 --rc genhtml_function_coverage=1 00:18:16.285 --rc genhtml_legend=1 00:18:16.285 --rc geninfo_all_blocks=1 00:18:16.285 --rc geninfo_unexecuted_blocks=1 00:18:16.285 00:18:16.285 ' 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:16.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.285 --rc genhtml_branch_coverage=1 00:18:16.285 --rc genhtml_function_coverage=1 00:18:16.285 --rc genhtml_legend=1 00:18:16.285 --rc geninfo_all_blocks=1 00:18:16.285 --rc geninfo_unexecuted_blocks=1 00:18:16.285 00:18:16.285 ' 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:16.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.285 --rc genhtml_branch_coverage=1 00:18:16.285 --rc genhtml_function_coverage=1 00:18:16.285 --rc genhtml_legend=1 00:18:16.285 --rc geninfo_all_blocks=1 00:18:16.285 --rc geninfo_unexecuted_blocks=1 00:18:16.285 00:18:16.285 ' 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
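nvmf/common.sh derives a per-run host identity here: nvme gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>, the UUID part doubles as NVME_HOSTID, and both are packed into the NVME_HOST argument array for later nvme connect calls. A hedged sketch of that pattern; the address, port and subsystem NQN mirror values configured elsewhere in this run (10.0.0.2:4420, cnode1), but the connect call itself is illustrative:

    NVME_HOSTNQN=$(nvme gen-hostnqn)                 # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}             # keep only the UUID suffix (assumed extraction)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Illustrative consumer of the identity array:
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"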
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:16.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
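The "[: : integer expression expected" message above comes from the '[' '' -eq 1 ']' test at nvmf/common.sh line 33: the flag being tested is unset in this environment, so -eq sees an empty string instead of a number. It is harmless here (the test simply evaluates false), but the usual defensive pattern is to give the variable a numeric default before the arithmetic test; VARIABLE below is a stand-in, not the actual name used in common.sh:

    if [ "${VARIABLE:-0}" -eq 1 ]; then   # default to 0 so -eq always sees a number
        echo "optional feature enabled"   # placeholder action, not from this log
    fi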
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=767c81ad-6cfe-4354-bd2d-25a4bdb8ada6 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=87f3c0b7-2896-4021-b225-ab00eca9e670 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5896eff1-09bf-44b5-ae7e-48ef1c303495 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:16.285 23:51:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:18.188 23:51:44 
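ns_masking.sh pre-generates everything it will exercise later: two namespace UUIDs, a subsystem NQN, two host NQNs, a host ID, and the RPC socket paths. A hedged sketch of how identifiers like these are typically fed to the target over rpc.py; the bdev name Malloc0 is a placeholder and the exact flag spellings should be checked against the rpc.py in this tree:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1
    ns1uuid=$(uuidgen)

    # Placeholder wiring: create the subsystem, attach a namespace with a fixed
    # UUID, and allow one host NQN to reach it.
    $rpc_py nvmf_create_subsystem "$SUBSYSNQN" -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns "$SUBSYSNQN" Malloc0 -u "$ns1uuid"
    $rpc_py nvmf_subsystem_add_host "$SUBSYSNQN" "$HOSTNQN1"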
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:18.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:18.188 23:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:18.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.188 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:18.189 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
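Device discovery above is purely sysfs driven: for every Intel E810 function (vendor 0x8086, device 0x159b) the script lists /sys/bus/pci/devices/<bdf>/net/ to learn which renamed interface (cvl_0_0, cvl_0_1) sits on top of it. The same lookup can be reproduced by hand:

    # List the net devices backing each E810 (8086:159b) PCI function.
    for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$bdf"/net/*; do
            [ -e "$dev" ] || continue          # function may have no netdev bound
            echo "Found net device under $bdf: ${dev##*/}"
        done
    done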
00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:18.189 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.189 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.447 23:51:44 
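Because this is a phy run with two ports of the same NIC, nvmf_tcp_init splits them across a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Condensed from the trace around this point:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up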
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:18.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:18:18.447 00:18:18.447 --- 10.0.0.2 ping statistics --- 00:18:18.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.447 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:18.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:18.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:18:18.447 00:18:18.447 --- 10.0.0.1 ping statistics --- 00:18:18.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.447 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3455887 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3455887 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3455887 ']' 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:18.447 23:51:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:18.447 [2024-11-09 23:51:44.590453] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:18:18.447 [2024-11-09 23:51:44.590618] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.705 [2024-11-09 23:51:44.736887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.705 [2024-11-09 23:51:44.870180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.705 [2024-11-09 23:51:44.870267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.705 [2024-11-09 23:51:44.870297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.705 [2024-11-09 23:51:44.870322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.705 [2024-11-09 23:51:44.870342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
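For reference, the target/initiator topology that the trace built just before launching nvmf_tgt can be reproduced by hand. A sketch using the interface, namespace, and address names from this run (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, and 10.0.0.0/24 are specific to this host); every command below also appears in the trace:

    # One E810 port moves into a private namespace and serves as the NVMe/TCP
    # target; the other port stays in the root namespace as the initiator.
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP (port 4420) in from the initiator-facing interface.
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity checks, as in the log:
    ping -c 1 10.0.0.2
    sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # The target app then runs inside the namespace, e.g.:
    # sudo ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF
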
00:18:18.705 [2024-11-09 23:51:44.871964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.640 23:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:19.640 23:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:18:19.640 23:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:19.640 23:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.640 23:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:19.640 23:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.640 23:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:19.640 [2024-11-09 23:51:45.826382] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.898 23:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:19.898 23:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:19.898 23:51:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:20.156 Malloc1 00:18:20.156 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:20.414 Malloc2 00:18:20.415 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:20.981 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:21.239 23:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.497 [2024-11-09 23:51:47.538632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.497 23:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:21.497 23:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5896eff1-09bf-44b5-ae7e-48ef1c303495 -a 10.0.0.2 -s 4420 -i 4 00:18:21.754 23:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:21.754 23:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:18:21.754 23:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.754 23:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:21.754 
23:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:23.653 [ 0]:0x1 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=afeebfecbeaa45ef9a5a317dbad3d9e5 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ afeebfecbeaa45ef9a5a317dbad3d9e5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.653 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:23.912 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:23.912 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.912 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:23.912 [ 0]:0x1 00:18:24.170 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:24.170 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.170 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=afeebfecbeaa45ef9a5a317dbad3d9e5 00:18:24.170 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ afeebfecbeaa45ef9a5a317dbad3d9e5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.170 23:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:24.170 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.170 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:24.170 [ 1]:0x2 00:18:24.171 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:24.171 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.171 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2a2f28d165944416a6ec99eb7b766059 00:18:24.171 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2a2f28d165944416a6ec99eb7b766059 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.171 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:24.171 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:24.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.171 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:24.429 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:24.995 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:24.995 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5896eff1-09bf-44b5-ae7e-48ef1c303495 -a 10.0.0.2 -s 4420 -i 4 00:18:24.995 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:24.995 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:18:24.995 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.995 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:18:24.995 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:18:24.995 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:27.523 [ 0]:0x2 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=2a2f28d165944416a6ec99eb7b766059 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2a2f28d165944416a6ec99eb7b766059 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:27.523 [ 0]:0x1 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=afeebfecbeaa45ef9a5a317dbad3d9e5 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ afeebfecbeaa45ef9a5a317dbad3d9e5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:27.523 [ 1]:0x2 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:27.523 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:27.782 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2a2f28d165944416a6ec99eb7b766059 00:18:27.782 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2a2f28d165944416a6ec99eb7b766059 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.782 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.040 23:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:28.040 [ 0]:0x2 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2a2f28d165944416a6ec99eb7b766059 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2a2f28d165944416a6ec99eb7b766059 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:28.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:28.040 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:28.299 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:28.299 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5896eff1-09bf-44b5-ae7e-48ef1c303495 -a 10.0.0.2 -s 4420 -i 4 00:18:28.557 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:28.557 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:18:28.557 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:28.557 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:18:28.557 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:18:28.557 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:31.087 [ 0]:0x1 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=afeebfecbeaa45ef9a5a317dbad3d9e5 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ afeebfecbeaa45ef9a5a317dbad3d9e5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:31.087 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.088 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:31.088 [ 1]:0x2 00:18:31.088 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:31.088 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.088 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2a2f28d165944416a6ec99eb7b766059 00:18:31.088 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2a2f28d165944416a6ec99eb7b766059 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.088 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:31.088 [ 0]:0x2 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2a2f28d165944416a6ec99eb7b766059 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2a2f28d165944416a6ec99eb7b766059 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.088 23:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:31.088 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:31.655 [2024-11-09 23:51:57.557649] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:31.655 request: 00:18:31.655 { 00:18:31.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.655 "nsid": 2, 00:18:31.655 "host": "nqn.2016-06.io.spdk:host1", 00:18:31.655 "method": "nvmf_ns_remove_host", 00:18:31.655 "req_id": 1 00:18:31.655 } 00:18:31.655 Got JSON-RPC error response 00:18:31.655 response: 00:18:31.655 { 00:18:31.655 "code": -32602, 00:18:31.655 "message": "Invalid parameters" 00:18:31.655 } 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:31.655 23:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:31.655 [ 0]:0x2 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2a2f28d165944416a6ec99eb7b766059 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2a2f28d165944416a6ec99eb7b766059 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:31.655 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:31.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.913 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3457644 00:18:31.913 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:31.913 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.913 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3457644 /var/tmp/host.sock 00:18:31.913 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3457644 ']' 00:18:31.914 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:18:31.914 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:31.914 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:31.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:31.914 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:31.914 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:31.914 [2024-11-09 23:51:57.973030] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:18:31.914 [2024-11-09 23:51:57.973173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457644 ] 00:18:31.914 [2024-11-09 23:51:58.114125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.172 [2024-11-09 23:51:58.256014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.107 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:33.107 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:18:33.107 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:33.365 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:33.623 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 767c81ad-6cfe-4354-bd2d-25a4bdb8ada6 00:18:33.623 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:33.623 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 767C81AD6CFE4354BD2D25A4BDB8ADA6 -i 00:18:34.188 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 87f3c0b7-2896-4021-b225-ab00eca9e670 00:18:34.188 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:34.188 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 87F3C0B728964021B225AB00ECA9E670 -i 00:18:34.446 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:34.704 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:34.961 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:34.961 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:35.550 nvme0n1 00:18:35.550 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:35.550 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:35.861 nvme1n2 00:18:35.861 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:35.861 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:35.861 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:35.861 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:35.861 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:36.175 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:36.175 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:36.175 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:36.175 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:36.433 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 767c81ad-6cfe-4354-bd2d-25a4bdb8ada6 == \7\6\7\c\8\1\a\d\-\6\c\f\e\-\4\3\5\4\-\b\d\2\d\-\2\5\a\4\b\d\b\8\a\d\a\6 ]] 00:18:36.433 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:36.433 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:36.433 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:36.691 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
87f3c0b7-2896-4021-b225-ab00eca9e670 == \8\7\f\3\c\0\b\7\-\2\8\9\6\-\4\0\2\1\-\b\2\2\5\-\a\b\0\0\e\c\a\9\e\6\7\0 ]] 00:18:36.691 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:36.948 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:37.205 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 767c81ad-6cfe-4354-bd2d-25a4bdb8ada6 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 767C81AD6CFE4354BD2D25A4BDB8ADA6 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 767C81AD6CFE4354BD2D25A4BDB8ADA6 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:37.206 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 767C81AD6CFE4354BD2D25A4BDB8ADA6 00:18:37.464 [2024-11-09 23:52:03.544906] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:37.464 [2024-11-09 23:52:03.544978] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:37.464 [2024-11-09 23:52:03.545005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.464 request: 00:18:37.464 { 00:18:37.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.464 "namespace": { 00:18:37.464 "bdev_name": 
"invalid", 00:18:37.464 "nsid": 1, 00:18:37.464 "nguid": "767C81AD6CFE4354BD2D25A4BDB8ADA6", 00:18:37.464 "no_auto_visible": false 00:18:37.464 }, 00:18:37.464 "method": "nvmf_subsystem_add_ns", 00:18:37.464 "req_id": 1 00:18:37.464 } 00:18:37.464 Got JSON-RPC error response 00:18:37.464 response: 00:18:37.464 { 00:18:37.464 "code": -32602, 00:18:37.464 "message": "Invalid parameters" 00:18:37.464 } 00:18:37.464 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:37.464 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.464 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.464 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.464 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 767c81ad-6cfe-4354-bd2d-25a4bdb8ada6 00:18:37.464 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:37.464 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 767C81AD6CFE4354BD2D25A4BDB8ADA6 -i 00:18:37.723 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:40.250 23:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:40.250 23:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:40.250 23:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:40.250 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:40.250 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3457644 00:18:40.250 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3457644 ']' 00:18:40.250 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3457644 00:18:40.250 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:18:40.250 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:40.250 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3457644 00:18:40.250 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:40.250 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:40.250 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3457644' 00:18:40.250 killing process with pid 3457644 00:18:40.250 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3457644 00:18:40.250 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3457644 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:42.780 rmmod nvme_tcp 00:18:42.780 rmmod nvme_fabrics 00:18:42.780 rmmod nvme_keyring 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3455887 ']' 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3455887 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3455887 ']' 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3455887 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3455887 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3455887' 00:18:42.780 killing process with pid 3455887 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3455887 00:18:42.780 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3455887 00:18:44.681 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.681 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.681 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.681 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:44.681 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:44.681 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
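While the run tears down, the namespace-masking flow it exercised is easier to see condensed into one place. The sketch below strings together only rpc.py and nvme-cli invocations that appear verbatim in the trace (the subsystem NQN, host NQN, serial, and the 10.0.0.2:4420 listener are all specific to this run):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Target side: transport, backing bdev, subsystem, and a namespace that is
    # not automatically visible to any host.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Grant, then revoke, visibility of namespace 1 for a specific host NQN.
    $RPC nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # Initiator side: connect as host1 and inspect which namespaces are exposed.
    # In this trace a namespace hidden from the host reports an all-zero NGUID,
    # which is what the comparisons against 32 zeros above key off.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
         -a 10.0.0.2 -s 4420 -i 4
    nvme list-ns /dev/nvme0
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
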
00:18:44.681 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.681 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.681 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:44.681 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.681 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.681 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.587 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:46.587 00:18:46.587 real 0m30.311s 00:18:46.587 user 0m45.130s 00:18:46.587 sys 0m4.962s 00:18:46.587 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:46.587 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:46.587 ************************************ 00:18:46.587 END TEST nvmf_ns_masking 00:18:46.587 ************************************ 00:18:46.587 23:52:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:46.587 23:52:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:46.587 23:52:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:46.587 23:52:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:46.587 23:52:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.587 ************************************ 00:18:46.587 START TEST nvmf_nvme_cli 00:18:46.587 ************************************ 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:46.588 * Looking for test storage... 
00:18:46.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:46.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.588 --rc genhtml_branch_coverage=1 00:18:46.588 --rc genhtml_function_coverage=1 00:18:46.588 --rc genhtml_legend=1 00:18:46.588 --rc geninfo_all_blocks=1 00:18:46.588 --rc geninfo_unexecuted_blocks=1 00:18:46.588 00:18:46.588 ' 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:46.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.588 --rc genhtml_branch_coverage=1 00:18:46.588 --rc genhtml_function_coverage=1 00:18:46.588 --rc genhtml_legend=1 00:18:46.588 --rc geninfo_all_blocks=1 00:18:46.588 --rc geninfo_unexecuted_blocks=1 00:18:46.588 00:18:46.588 ' 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:46.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.588 --rc genhtml_branch_coverage=1 00:18:46.588 --rc genhtml_function_coverage=1 00:18:46.588 --rc genhtml_legend=1 00:18:46.588 --rc geninfo_all_blocks=1 00:18:46.588 --rc geninfo_unexecuted_blocks=1 00:18:46.588 00:18:46.588 ' 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:46.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.588 --rc genhtml_branch_coverage=1 00:18:46.588 --rc genhtml_function_coverage=1 00:18:46.588 --rc genhtml_legend=1 00:18:46.588 --rc geninfo_all_blocks=1 00:18:46.588 --rc geninfo_unexecuted_blocks=1 00:18:46.588 00:18:46.588 ' 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
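The dense xtrace above is the lcov version probe from scripts/common.sh: `lt 1.15 2` splits each version string on dots, dashes and colons and compares the fields numerically, left to right, to decide whether the legacy lcov --rc options are needed. A simplified stand-alone sketch of that comparison (illustrative only, not the script's own helper; non-numeric fields are ignored here):

    # Sketch: succeed if $1 is an older version than $2, mirroring the
    # field-by-field comparison seen in the traced cmp_versions call.
    version_lt() {
        local -a a b
        IFS=.-: read -r -a a <<< "$1"
        IFS=.-: read -r -a b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    # Example: lcov 1.15 predates 2.x, so the legacy coverage options are chosen.
    version_lt 1.15 2 && echo "use legacy lcov options"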
00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:46.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.588 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.589 23:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:46.589 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:49.121 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:49.122 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:49.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.122 
23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:49.122 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:49.122 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:49.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:18:49.122 00:18:49.122 --- 10.0.0.2 ping statistics --- 00:18:49.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.122 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:49.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:49.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:18:49.122 00:18:49.122 --- 10.0.0.1 ping statistics --- 00:18:49.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.122 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3461071 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3461071 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3461071 ']' 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:49.122 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:49.122 [2024-11-09 23:52:15.085361] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:18:49.122 [2024-11-09 23:52:15.085517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.122 [2024-11-09 23:52:15.259517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.381 [2024-11-09 23:52:15.390311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.381 [2024-11-09 23:52:15.390388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.381 [2024-11-09 23:52:15.390409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.381 [2024-11-09 23:52:15.390433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.381 [2024-11-09 23:52:15.390450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.381 [2024-11-09 23:52:15.392940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.381 [2024-11-09 23:52:15.392994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.381 [2024-11-09 23:52:15.393039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.381 [2024-11-09 23:52:15.393060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.948 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:49.948 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:18:49.948 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.948 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:49.948 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:49.948 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.948 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:49.948 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.948 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:49.948 [2024-11-09 23:52:16.145623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:50.206 Malloc0 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
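Condensed for reference, the target-side bring-up that this test drives through rpc_cmd (here and in the entries that follow) amounts to the RPC sequence below; the rpc.py path, bdev names, NQN and 10.0.0.2:4420 listener are the values of this particular run, not fixed defaults:

    # Target-side setup, condensed from the traced rpc_cmd calls
    # (issued against the nvmf_tgt running inside the cvl_0_0_ns_spdk namespace)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport with the options used by the test
    $rpc bdev_malloc_create 64 512 -b Malloc0               # two 64 MiB, 512 B-block RAM bdevs
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291   # subsystem with serial/model as traced
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420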
00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:50.206 Malloc1 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:50.206 [2024-11-09 23:52:16.342454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.206 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:50.465 00:18:50.465 Discovery Log Number of Records 2, Generation counter 2 00:18:50.465 =====Discovery Log Entry 0====== 00:18:50.465 trtype: tcp 00:18:50.465 adrfam: ipv4 00:18:50.465 subtype: current discovery subsystem 00:18:50.465 treq: not required 00:18:50.465 portid: 0 00:18:50.465 trsvcid: 4420 00:18:50.465 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:50.465 traddr: 10.0.0.2 00:18:50.465 eflags: explicit discovery connections, duplicate discovery information 00:18:50.465 sectype: none 00:18:50.465 =====Discovery Log Entry 1====== 00:18:50.465 trtype: tcp 00:18:50.465 adrfam: ipv4 00:18:50.465 subtype: nvme subsystem 00:18:50.465 treq: not required 00:18:50.465 portid: 0 00:18:50.465 trsvcid: 4420 00:18:50.465 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:50.465 traddr: 10.0.0.2 00:18:50.465 eflags: none 00:18:50.465 sectype: none 00:18:50.465 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:50.465 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:50.465 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:50.465 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.465 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:50.465 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:50.465 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.465 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:50.465 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:50.465 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:50.465 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:51.031 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:51.031 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:18:51.031 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.031 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:18:51.031 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:18:51.032 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:53.562 23:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:53.562 /dev/nvme0n2 ]] 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:53.562 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:53.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.820 23:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:53.820 rmmod nvme_tcp 00:18:53.820 rmmod nvme_fabrics 00:18:53.820 rmmod nvme_keyring 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3461071 ']' 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3461071 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3461071 ']' 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3461071 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3461071 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3461071' 00:18:53.820 killing process with pid 3461071 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3461071 00:18:53.820 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3461071 00:18:55.719 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:55.719 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:55.719 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:55.719 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:55.719 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:55.719 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:55.719 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:55.719 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:55.719 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:55.719 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.719 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.719 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:57.624 00:18:57.624 real 0m10.965s 00:18:57.624 user 0m23.646s 00:18:57.624 sys 0m2.708s 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:57.624 ************************************ 00:18:57.624 END TEST nvmf_nvme_cli 00:18:57.624 ************************************ 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:57.624 ************************************ 00:18:57.624 START TEST nvmf_auth_target 00:18:57.624 ************************************ 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 
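For reference, the host-side sequence that the nvmf_nvme_cli test above exercised reduces to a handful of nvme-cli calls; the host NQN/ID, target address and subsystem NQN below are taken from this run rather than being fixed values:

    # Host-side flow exercised above (values from this run)
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

    # 1. Query the discovery service exposed on 10.0.0.2:4420
    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420

    # 2. Connect to the I/O subsystem; its two namespaces appear as /dev/nvme0n1 and /dev/nvme0n2
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    # 3. Enumerate the namespaces and verify them by serial number
    nvme list
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME

    # 4. Tear down the host connection
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1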
00:18:57.624 * Looking for test storage... 00:18:57.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:57.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.624 --rc genhtml_branch_coverage=1 00:18:57.624 --rc genhtml_function_coverage=1 00:18:57.624 --rc genhtml_legend=1 00:18:57.624 --rc geninfo_all_blocks=1 00:18:57.624 --rc geninfo_unexecuted_blocks=1 00:18:57.624 00:18:57.624 ' 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:57.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.624 --rc genhtml_branch_coverage=1 00:18:57.624 --rc genhtml_function_coverage=1 00:18:57.624 --rc genhtml_legend=1 00:18:57.624 --rc geninfo_all_blocks=1 00:18:57.624 --rc geninfo_unexecuted_blocks=1 00:18:57.624 00:18:57.624 ' 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:57.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.624 --rc genhtml_branch_coverage=1 00:18:57.624 --rc genhtml_function_coverage=1 00:18:57.624 --rc genhtml_legend=1 00:18:57.624 --rc geninfo_all_blocks=1 00:18:57.624 --rc geninfo_unexecuted_blocks=1 00:18:57.624 00:18:57.624 ' 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:57.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.624 --rc genhtml_branch_coverage=1 00:18:57.624 --rc genhtml_function_coverage=1 00:18:57.624 --rc genhtml_legend=1 00:18:57.624 --rc geninfo_all_blocks=1 00:18:57.624 --rc geninfo_unexecuted_blocks=1 00:18:57.624 00:18:57.624 ' 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.624 23:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.624 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:57.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:57.625 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:59.528 
23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:59.528 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:59.529 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:59.529 23:52:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:59.529 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:59.529 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:59.529 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.529 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:59.788 23:52:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:59.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:18:59.788 00:18:59.788 --- 10.0.0.2 ping statistics --- 00:18:59.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.788 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:59.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:18:59.788 00:18:59.788 --- 10.0.0.1 ping statistics --- 00:18:59.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.788 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3463846 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3463846 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3463846 ']' 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
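
The network bring-up traced above (nvmf_tcp_init) reduces to a short, standalone sequence. The interface names cvl_0_0/cvl_0_1, the namespace name, the 10.0.0.0/24 addresses and the iptables rule are all taken from this run; treat the sketch as a summary of what the trace shows, not a canonical recipe:

# Move one port of the E810 pair into a private namespace and address both ends.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and confirm reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp

The nvmf target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth, as logged just below), so the initiator at 10.0.0.1 reaches it over the physical NIC rather than loopback.
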
00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:59.788 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3463998 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=645c838ab110dc77e5dea9713cb6b7815ff28b680d570c21 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cEs 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 645c838ab110dc77e5dea9713cb6b7815ff28b680d570c21 0 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 645c838ab110dc77e5dea9713cb6b7815ff28b680d570c21 0 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=645c838ab110dc77e5dea9713cb6b7815ff28b680d570c21 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:01.165 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
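
The gen_dhchap_key helper being traced here pulls the requested number of hex characters from /dev/urandom with xxd and hands them to an inline python snippet whose body the xtrace does not capture. A plausible reconstruction of that step, assuming the usual DHHC-1 secret layout (secret bytes followed by a little-endian CRC-32, base64-encoded) and using the digest codes visible in the trace (null=0, sha256=1, sha384=2, sha512=3), would be:

# Sketch only: the python body and option handling are assumptions; the
# xxd/mktemp/chmod steps and the digest numbering are taken from the trace.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)       # "len" hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                     # the ASCII hex string is the secret
crc = zlib.crc32(key).to_bytes(4, "little")    # 4-byte integrity tail
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$file"
    echo "$file"
}

That reading is consistent with this run's values: the 48-character hex key 645c838a…0c21 generated here reappears later, base64-encoded as NjQ1Yzgz…YzIx plus a four-byte tail, inside the DHHC-1:00:… secret handed to nvme connect.
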
00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cEs 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cEs 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.cEs 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4e01f774eb6e2a9efba0274659dbc6899cc9ad7cae3cf1432fb0140c6bac18d5 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pSs 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4e01f774eb6e2a9efba0274659dbc6899cc9ad7cae3cf1432fb0140c6bac18d5 3 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4e01f774eb6e2a9efba0274659dbc6899cc9ad7cae3cf1432fb0140c6bac18d5 3 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4e01f774eb6e2a9efba0274659dbc6899cc9ad7cae3cf1432fb0140c6bac18d5 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pSs 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pSs 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.pSs 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
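
Before the remaining keys are generated, note that secrets produced this way round-trip cleanly. A hypothetical check against the key-0 value from this run (the raw hex above and the DHHC-1 string that appears later in the nvme connect call), under the same layout assumption as the sketch above:

# Hypothetical verification helper; the CRC-32 layout is an assumption, the
# secret string is copied verbatim from this run's nvme connect invocation.
secret='DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==:'
python3 - "$secret" <<'PY'
import base64, sys, zlib
prefix, digest, payload, _ = sys.argv[1].split(":")
raw = base64.b64decode(payload)
key, crc = raw[:-4], raw[-4:]                         # secret bytes + 4-byte tail
ok = crc == zlib.crc32(key).to_bytes(4, "little")
print(prefix, int(digest, 16), key.decode(), "crc_ok=%s" % ok)
PY

Decoding the payload returns the original 48-character hex string, which is how the gen_dhchap_key output and the secret on the nvme command line can be matched up when reading a trace like this one.
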
00:19:01.165 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ffca0dc876e020d3c12e0e77fd26adbd 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.OcH 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ffca0dc876e020d3c12e0e77fd26adbd 1 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ffca0dc876e020d3c12e0e77fd26adbd 1 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ffca0dc876e020d3c12e0e77fd26adbd 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.OcH 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.OcH 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.OcH 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=46d92a6000b9312bc323cdff2b877ea876f8e9e9108ab95f 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.99N 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 46d92a6000b9312bc323cdff2b877ea876f8e9e9108ab95f 2 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 46d92a6000b9312bc323cdff2b877ea876f8e9e9108ab95f 2 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.166 23:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=46d92a6000b9312bc323cdff2b877ea876f8e9e9108ab95f 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.99N 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.99N 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.99N 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c05bf5fe010ac5623cbe3a9ba3ba3d185b1cf19465dead1e 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OAf 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c05bf5fe010ac5623cbe3a9ba3ba3d185b1cf19465dead1e 2 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c05bf5fe010ac5623cbe3a9ba3ba3d185b1cf19465dead1e 2 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c05bf5fe010ac5623cbe3a9ba3ba3d185b1cf19465dead1e 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OAf 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OAf 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.OAf 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f507f2e2b4d3aabf98de4d8e4a318d84 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TpB 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f507f2e2b4d3aabf98de4d8e4a318d84 1 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f507f2e2b4d3aabf98de4d8e4a318d84 1 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f507f2e2b4d3aabf98de4d8e4a318d84 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TpB 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TpB 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.TpB 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bade1db356b3f7ad7056da22d06dc3f67badc75b8a1e8af8709dcff7b75996a4 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xtn 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key bade1db356b3f7ad7056da22d06dc3f67badc75b8a1e8af8709dcff7b75996a4 3 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bade1db356b3f7ad7056da22d06dc3f67badc75b8a1e8af8709dcff7b75996a4 3 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bade1db356b3f7ad7056da22d06dc3f67badc75b8a1e8af8709dcff7b75996a4 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xtn 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xtn 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.xtn 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3463846 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3463846 ']' 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:01.166 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.425 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:01.425 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:01.425 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3463998 /var/tmp/host.sock 00:19:01.425 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3463998 ']' 00:19:01.425 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:19:01.425 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:01.425 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:01.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
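
At this point four host keys (key0 null, key1 sha256, key2 sha384, key3 sha512) and three controller keys (ckey0–ckey2) exist as 0600 files under /tmp. The stretch of trace that follows registers each file with the keyring of both daemons and then exercises DH-HMAC-CHAP for one digest/dhgroup/key combination at a time. Condensed into the underlying rpc.py calls, with the socket paths and file names from this run, the first iteration is roughly:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST="-s /var/tmp/host.sock"   # the target daemon listens on the default /var/tmp/spdk.sock

# Register the key material on both sides (repeated for key1/ckey1, key2/ckey2, key3).
$RPC keyring_file_add_key key0 /tmp/spdk.key-null.cEs
$RPC $HOST keyring_file_add_key key0 /tmp/spdk.key-null.cEs
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pSs
$RPC $HOST keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pSs

# Pin the initiator to a single digest/dhgroup pair for this iteration.
$RPC $HOST bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Allow the host NQN on the subsystem with the chosen key (plus controller key).
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller from the host; this is where the DH-HMAC-CHAP exchange runs.
$RPC $HOST bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

Success is then checked further down by asking the target for its queue pairs (nvmf_subsystem_get_qpairs) and confirming that the connection's auth block reports state "completed" with the expected digest and dhgroup, and again by connecting the kernel initiator with nvme connect using the formatted DHHC-1 secrets.
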
00:19:01.425 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:01.425 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cEs 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cEs 00:19:02.360 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cEs 00:19:02.619 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.pSs ]] 00:19:02.619 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pSs 00:19:02.619 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.619 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.619 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.619 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pSs 00:19:02.619 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pSs 00:19:02.877 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:02.877 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OcH 00:19:02.877 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.877 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.877 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.877 23:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.OcH 00:19:02.877 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.OcH 00:19:03.135 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.99N ]] 00:19:03.135 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.99N 00:19:03.135 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.135 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.135 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.135 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.99N 00:19:03.135 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.99N 00:19:03.393 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:03.393 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OAf 00:19:03.393 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.393 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.393 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.393 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.OAf 00:19:03.393 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.OAf 00:19:03.651 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.TpB ]] 00:19:03.651 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TpB 00:19:03.651 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.651 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.651 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.651 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TpB 00:19:03.651 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TpB 00:19:04.217 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:04.217 23:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xtn 00:19:04.217 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.217 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.217 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.217 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.xtn 00:19:04.217 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.xtn 00:19:04.475 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:04.475 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:04.475 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.475 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.475 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.475 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.734 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:04.734 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.734 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.734 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:04.734 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:04.734 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.734 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.734 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.734 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.734 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.734 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.734 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.734 
23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.992 00:19:04.992 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.992 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.992 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.251 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.251 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.251 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.251 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.251 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.251 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.251 { 00:19:05.251 "cntlid": 1, 00:19:05.251 "qid": 0, 00:19:05.251 "state": "enabled", 00:19:05.251 "thread": "nvmf_tgt_poll_group_000", 00:19:05.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:05.251 "listen_address": { 00:19:05.251 "trtype": "TCP", 00:19:05.251 "adrfam": "IPv4", 00:19:05.251 "traddr": "10.0.0.2", 00:19:05.251 "trsvcid": "4420" 00:19:05.251 }, 00:19:05.251 "peer_address": { 00:19:05.251 "trtype": "TCP", 00:19:05.251 "adrfam": "IPv4", 00:19:05.251 "traddr": "10.0.0.1", 00:19:05.251 "trsvcid": "60048" 00:19:05.251 }, 00:19:05.251 "auth": { 00:19:05.251 "state": "completed", 00:19:05.251 "digest": "sha256", 00:19:05.251 "dhgroup": "null" 00:19:05.251 } 00:19:05.251 } 00:19:05.251 ]' 00:19:05.251 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.251 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.251 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.251 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:05.251 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.508 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.509 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.509 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.767 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:19:05.767 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:19:06.755 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.755 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.755 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.755 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.755 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.755 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.755 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:06.755 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:07.013 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:07.013 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.013 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.013 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:07.013 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:07.013 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.013 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.013 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.013 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.013 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.013 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.013 23:52:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.013 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.272 00:19:07.272 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.272 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.272 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.530 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.530 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.530 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.530 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.530 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.530 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.530 { 00:19:07.530 "cntlid": 3, 00:19:07.530 "qid": 0, 00:19:07.530 "state": "enabled", 00:19:07.530 "thread": "nvmf_tgt_poll_group_000", 00:19:07.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:07.530 "listen_address": { 00:19:07.530 "trtype": "TCP", 00:19:07.530 "adrfam": "IPv4", 00:19:07.530 "traddr": "10.0.0.2", 00:19:07.530 "trsvcid": "4420" 00:19:07.530 }, 00:19:07.530 "peer_address": { 00:19:07.530 "trtype": "TCP", 00:19:07.530 "adrfam": "IPv4", 00:19:07.530 "traddr": "10.0.0.1", 00:19:07.530 "trsvcid": "60070" 00:19:07.530 }, 00:19:07.530 "auth": { 00:19:07.530 "state": "completed", 00:19:07.530 "digest": "sha256", 00:19:07.530 "dhgroup": "null" 00:19:07.530 } 00:19:07.530 } 00:19:07.530 ]' 00:19:07.530 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.530 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.530 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.530 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:07.530 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.788 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.788 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.788 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.046 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:19:08.046 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:19:08.981 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.981 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.981 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.981 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.981 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.981 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.981 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.981 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:09.239 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:09.239 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.239 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:09.239 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:09.239 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:09.239 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.239 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.239 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.239 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.239 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.239 23:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.239 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.239 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.497 00:19:09.497 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.497 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.497 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.756 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.756 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.756 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.756 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.756 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.756 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.756 { 00:19:09.756 "cntlid": 5, 00:19:09.756 "qid": 0, 00:19:09.756 "state": "enabled", 00:19:09.756 "thread": "nvmf_tgt_poll_group_000", 00:19:09.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:09.756 "listen_address": { 00:19:09.756 "trtype": "TCP", 00:19:09.756 "adrfam": "IPv4", 00:19:09.756 "traddr": "10.0.0.2", 00:19:09.756 "trsvcid": "4420" 00:19:09.756 }, 00:19:09.756 "peer_address": { 00:19:09.756 "trtype": "TCP", 00:19:09.756 "adrfam": "IPv4", 00:19:09.756 "traddr": "10.0.0.1", 00:19:09.756 "trsvcid": "60104" 00:19:09.756 }, 00:19:09.756 "auth": { 00:19:09.756 "state": "completed", 00:19:09.756 "digest": "sha256", 00:19:09.756 "dhgroup": "null" 00:19:09.756 } 00:19:09.756 } 00:19:09.756 ]' 00:19:09.756 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.013 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.013 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.013 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:10.014 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.014 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.014 23:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.014 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.271 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:19:10.271 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:19:11.204 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.205 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.205 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.205 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.205 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.205 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.205 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.205 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.461 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.025 00:19:12.025 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.025 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.025 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.282 { 00:19:12.282 "cntlid": 7, 00:19:12.282 "qid": 0, 00:19:12.282 "state": "enabled", 00:19:12.282 "thread": "nvmf_tgt_poll_group_000", 00:19:12.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:12.282 "listen_address": { 00:19:12.282 "trtype": "TCP", 00:19:12.282 "adrfam": "IPv4", 00:19:12.282 "traddr": "10.0.0.2", 00:19:12.282 "trsvcid": "4420" 00:19:12.282 }, 00:19:12.282 "peer_address": { 00:19:12.282 "trtype": "TCP", 00:19:12.282 "adrfam": "IPv4", 00:19:12.282 "traddr": "10.0.0.1", 00:19:12.282 "trsvcid": "60126" 00:19:12.282 }, 00:19:12.282 "auth": { 00:19:12.282 "state": "completed", 00:19:12.282 "digest": "sha256", 00:19:12.282 "dhgroup": "null" 00:19:12.282 } 00:19:12.282 } 00:19:12.282 ]' 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.282 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.540 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:19:12.540 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:19:13.474 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.474 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.474 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.474 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.474 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.474 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.474 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.474 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.474 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.039 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.297 00:19:14.297 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.297 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.297 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.555 { 00:19:14.555 "cntlid": 9, 00:19:14.555 "qid": 0, 00:19:14.555 "state": "enabled", 00:19:14.555 "thread": "nvmf_tgt_poll_group_000", 00:19:14.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:14.555 "listen_address": { 00:19:14.555 "trtype": "TCP", 00:19:14.555 "adrfam": "IPv4", 00:19:14.555 "traddr": "10.0.0.2", 00:19:14.555 "trsvcid": "4420" 00:19:14.555 }, 00:19:14.555 "peer_address": { 00:19:14.555 "trtype": "TCP", 00:19:14.555 "adrfam": "IPv4", 00:19:14.555 "traddr": "10.0.0.1", 00:19:14.555 "trsvcid": "56140" 00:19:14.555 }, 00:19:14.555 "auth": { 00:19:14.555 "state": "completed", 00:19:14.555 "digest": "sha256", 00:19:14.555 "dhgroup": "ffdhe2048" 00:19:14.555 } 00:19:14.555 } 00:19:14.555 ]' 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.555 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.121 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:19:15.121 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:19:16.054 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.054 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.054 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.054 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.054 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.054 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.054 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.055 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:16.313 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:16.313 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.313 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.313 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:16.313 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:16.313 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.313 23:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.313 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.313 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.313 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.313 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.313 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.313 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.571 00:19:16.571 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.571 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.571 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.830 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.830 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.830 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.830 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.830 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.830 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.830 { 00:19:16.830 "cntlid": 11, 00:19:16.830 "qid": 0, 00:19:16.830 "state": "enabled", 00:19:16.830 "thread": "nvmf_tgt_poll_group_000", 00:19:16.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:16.830 "listen_address": { 00:19:16.830 "trtype": "TCP", 00:19:16.830 "adrfam": "IPv4", 00:19:16.830 "traddr": "10.0.0.2", 00:19:16.830 "trsvcid": "4420" 00:19:16.830 }, 00:19:16.830 "peer_address": { 00:19:16.830 "trtype": "TCP", 00:19:16.830 "adrfam": "IPv4", 00:19:16.830 "traddr": "10.0.0.1", 00:19:16.830 "trsvcid": "56172" 00:19:16.830 }, 00:19:16.830 "auth": { 00:19:16.830 "state": "completed", 00:19:16.830 "digest": "sha256", 00:19:16.830 "dhgroup": "ffdhe2048" 00:19:16.830 } 00:19:16.830 } 00:19:16.830 ]' 00:19:16.830 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.830 23:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.830 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.830 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:16.830 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.088 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.088 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.088 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.346 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:19:17.346 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:19:18.280 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.280 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.280 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.280 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.280 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.280 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.280 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:18.280 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:18.537 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:18.537 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.537 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.537 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:18.537 23:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:18.537 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.537 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.537 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.537 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.537 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.537 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.537 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.537 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.795 00:19:19.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.311 { 00:19:19.311 "cntlid": 13, 00:19:19.311 "qid": 0, 00:19:19.311 "state": "enabled", 00:19:19.311 "thread": "nvmf_tgt_poll_group_000", 00:19:19.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:19.311 "listen_address": { 00:19:19.311 "trtype": "TCP", 00:19:19.311 "adrfam": "IPv4", 00:19:19.311 "traddr": "10.0.0.2", 00:19:19.311 "trsvcid": "4420" 00:19:19.311 }, 00:19:19.311 "peer_address": { 00:19:19.311 "trtype": "TCP", 00:19:19.311 "adrfam": "IPv4", 00:19:19.311 "traddr": "10.0.0.1", 00:19:19.311 "trsvcid": "56180" 00:19:19.311 }, 00:19:19.311 "auth": { 00:19:19.311 "state": "completed", 00:19:19.311 "digest": 
"sha256", 00:19:19.311 "dhgroup": "ffdhe2048" 00:19:19.311 } 00:19:19.311 } 00:19:19.311 ]' 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.311 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.569 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:19:19.569 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:19:20.943 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.943 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.943 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.943 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.943 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.943 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.943 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:20.943 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:20.943 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:20.943 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.943 23:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.943 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:20.943 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:20.943 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.943 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:20.943 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.943 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.943 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.943 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:20.943 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.943 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.510 00:19:21.510 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.510 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.510 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.768 { 00:19:21.768 "cntlid": 15, 00:19:21.768 "qid": 0, 00:19:21.768 "state": "enabled", 00:19:21.768 "thread": "nvmf_tgt_poll_group_000", 00:19:21.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:21.768 "listen_address": { 00:19:21.768 "trtype": "TCP", 00:19:21.768 "adrfam": "IPv4", 00:19:21.768 "traddr": "10.0.0.2", 00:19:21.768 "trsvcid": "4420" 00:19:21.768 }, 00:19:21.768 "peer_address": { 00:19:21.768 "trtype": "TCP", 00:19:21.768 "adrfam": "IPv4", 00:19:21.768 "traddr": "10.0.0.1", 00:19:21.768 
"trsvcid": "56206" 00:19:21.768 }, 00:19:21.768 "auth": { 00:19:21.768 "state": "completed", 00:19:21.768 "digest": "sha256", 00:19:21.768 "dhgroup": "ffdhe2048" 00:19:21.768 } 00:19:21.768 } 00:19:21.768 ]' 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.768 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.027 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:19:22.027 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:19:22.960 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.960 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.960 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.960 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.960 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.960 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.960 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.960 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.960 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:23.218 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:23.218 23:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.218 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:23.218 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:23.218 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:23.218 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.218 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.218 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.218 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.218 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.218 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.218 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.219 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.783 00:19:23.783 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.783 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.783 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.041 { 00:19:24.041 "cntlid": 17, 00:19:24.041 "qid": 0, 00:19:24.041 "state": "enabled", 00:19:24.041 "thread": "nvmf_tgt_poll_group_000", 00:19:24.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:24.041 "listen_address": { 00:19:24.041 "trtype": "TCP", 00:19:24.041 "adrfam": "IPv4", 
00:19:24.041 "traddr": "10.0.0.2", 00:19:24.041 "trsvcid": "4420" 00:19:24.041 }, 00:19:24.041 "peer_address": { 00:19:24.041 "trtype": "TCP", 00:19:24.041 "adrfam": "IPv4", 00:19:24.041 "traddr": "10.0.0.1", 00:19:24.041 "trsvcid": "56238" 00:19:24.041 }, 00:19:24.041 "auth": { 00:19:24.041 "state": "completed", 00:19:24.041 "digest": "sha256", 00:19:24.041 "dhgroup": "ffdhe3072" 00:19:24.041 } 00:19:24.041 } 00:19:24.041 ]' 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.041 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.299 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:19:24.299 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:19:25.232 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.233 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.233 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.233 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.233 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.233 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.233 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:25.233 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.799 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.057 00:19:26.057 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.057 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.057 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.315 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.315 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.315 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.315 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.315 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.315 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.315 { 
00:19:26.315 "cntlid": 19, 00:19:26.315 "qid": 0, 00:19:26.315 "state": "enabled", 00:19:26.315 "thread": "nvmf_tgt_poll_group_000", 00:19:26.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:26.315 "listen_address": { 00:19:26.315 "trtype": "TCP", 00:19:26.315 "adrfam": "IPv4", 00:19:26.315 "traddr": "10.0.0.2", 00:19:26.315 "trsvcid": "4420" 00:19:26.315 }, 00:19:26.315 "peer_address": { 00:19:26.315 "trtype": "TCP", 00:19:26.315 "adrfam": "IPv4", 00:19:26.315 "traddr": "10.0.0.1", 00:19:26.315 "trsvcid": "42952" 00:19:26.315 }, 00:19:26.315 "auth": { 00:19:26.315 "state": "completed", 00:19:26.315 "digest": "sha256", 00:19:26.315 "dhgroup": "ffdhe3072" 00:19:26.315 } 00:19:26.315 } 00:19:26.315 ]' 00:19:26.316 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.316 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.316 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.316 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:26.316 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.316 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.316 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.316 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.881 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:19:26.881 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:19:27.815 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.815 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.815 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.815 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.815 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.815 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.815 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:27.815 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.073 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.331 00:19:28.331 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.331 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.331 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.589 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.589 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.589 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.589 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.589 23:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.589 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.589 { 00:19:28.589 "cntlid": 21, 00:19:28.589 "qid": 0, 00:19:28.589 "state": "enabled", 00:19:28.589 "thread": "nvmf_tgt_poll_group_000", 00:19:28.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:28.589 "listen_address": { 00:19:28.589 "trtype": "TCP", 00:19:28.589 "adrfam": "IPv4", 00:19:28.589 "traddr": "10.0.0.2", 00:19:28.589 "trsvcid": "4420" 00:19:28.589 }, 00:19:28.589 "peer_address": { 00:19:28.589 "trtype": "TCP", 00:19:28.589 "adrfam": "IPv4", 00:19:28.589 "traddr": "10.0.0.1", 00:19:28.589 "trsvcid": "42994" 00:19:28.589 }, 00:19:28.589 "auth": { 00:19:28.589 "state": "completed", 00:19:28.589 "digest": "sha256", 00:19:28.589 "dhgroup": "ffdhe3072" 00:19:28.589 } 00:19:28.589 } 00:19:28.589 ]' 00:19:28.589 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.589 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.589 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.847 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:28.847 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.847 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.847 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.847 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.105 23:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:19:29.105 23:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:19:30.038 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.038 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.038 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.038 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.038 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:30.038 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.038 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.038 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.296 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:30.296 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.296 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.296 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:30.296 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:30.296 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.296 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:30.296 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.296 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.296 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.297 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:30.297 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.297 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.862 00:19:30.862 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.862 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.862 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.862 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.862 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.862 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.862 23:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.862 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.862 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.862 { 00:19:30.862 "cntlid": 23, 00:19:30.862 "qid": 0, 00:19:30.862 "state": "enabled", 00:19:30.862 "thread": "nvmf_tgt_poll_group_000", 00:19:30.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:30.863 "listen_address": { 00:19:30.863 "trtype": "TCP", 00:19:30.863 "adrfam": "IPv4", 00:19:30.863 "traddr": "10.0.0.2", 00:19:30.863 "trsvcid": "4420" 00:19:30.863 }, 00:19:30.863 "peer_address": { 00:19:30.863 "trtype": "TCP", 00:19:30.863 "adrfam": "IPv4", 00:19:30.863 "traddr": "10.0.0.1", 00:19:30.863 "trsvcid": "43018" 00:19:30.863 }, 00:19:30.863 "auth": { 00:19:30.863 "state": "completed", 00:19:30.863 "digest": "sha256", 00:19:30.863 "dhgroup": "ffdhe3072" 00:19:30.863 } 00:19:30.863 } 00:19:30.863 ]' 00:19:30.863 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.120 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.120 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.120 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.120 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.120 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.120 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.120 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.378 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:19:31.378 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:19:32.311 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.311 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.311 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.311 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.311 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:32.311 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.311 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.311 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.311 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.569 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.134 00:19:33.134 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.134 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.134 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.392 { 00:19:33.392 "cntlid": 25, 00:19:33.392 "qid": 0, 00:19:33.392 "state": "enabled", 00:19:33.392 "thread": "nvmf_tgt_poll_group_000", 00:19:33.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:33.392 "listen_address": { 00:19:33.392 "trtype": "TCP", 00:19:33.392 "adrfam": "IPv4", 00:19:33.392 "traddr": "10.0.0.2", 00:19:33.392 "trsvcid": "4420" 00:19:33.392 }, 00:19:33.392 "peer_address": { 00:19:33.392 "trtype": "TCP", 00:19:33.392 "adrfam": "IPv4", 00:19:33.392 "traddr": "10.0.0.1", 00:19:33.392 "trsvcid": "43040" 00:19:33.392 }, 00:19:33.392 "auth": { 00:19:33.392 "state": "completed", 00:19:33.392 "digest": "sha256", 00:19:33.392 "dhgroup": "ffdhe4096" 00:19:33.392 } 00:19:33.392 } 00:19:33.392 ]' 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.392 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.653 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:19:33.653 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:19:34.587 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.587 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.587 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.587 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.587 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.587 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.587 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:34.587 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.153 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:35.153 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.153 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.153 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:35.153 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:35.153 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.153 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.153 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.153 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.153 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.153 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.153 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.154 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.411 00:19:35.411 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.411 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.411 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.670 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.670 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.670 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.670 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.670 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.670 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.670 { 00:19:35.670 "cntlid": 27, 00:19:35.670 "qid": 0, 00:19:35.670 "state": "enabled", 00:19:35.670 "thread": "nvmf_tgt_poll_group_000", 00:19:35.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:35.670 "listen_address": { 00:19:35.670 "trtype": "TCP", 00:19:35.670 "adrfam": "IPv4", 00:19:35.670 "traddr": "10.0.0.2", 00:19:35.670 "trsvcid": "4420" 00:19:35.670 }, 00:19:35.670 "peer_address": { 00:19:35.670 "trtype": "TCP", 00:19:35.670 "adrfam": "IPv4", 00:19:35.670 "traddr": "10.0.0.1", 00:19:35.670 "trsvcid": "48774" 00:19:35.670 }, 00:19:35.670 "auth": { 00:19:35.670 "state": "completed", 00:19:35.670 "digest": "sha256", 00:19:35.670 "dhgroup": "ffdhe4096" 00:19:35.670 } 00:19:35.670 } 00:19:35.670 ]' 00:19:35.670 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.670 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.670 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.670 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:35.670 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.928 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.928 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.928 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.187 23:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:19:36.187 23:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:19:37.120 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:37.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.120 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.120 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.120 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.120 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.120 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.120 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.120 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.378 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.947 00:19:37.947 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
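The trace above and below repeats the same DH-CHAP cycle once per digest/dhgroup/key combination. A minimal consolidated sketch of one such cycle follows; it is an editorial summary, not part of auth.sh, and it assumes the target listener on 10.0.0.2:4420, the host-side SPDK app on /var/tmp/host.sock, and the key0/ckey0 keyring entries were all set up earlier in the run, with the target-side RPC socket left at its default and the long DHHC-1 secrets elided.

```bash
#!/usr/bin/env bash
# Sketch of one DH-CHAP iteration as seen in the trace (assumptions noted above).
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side SPDK bdev_nvme app

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostid=5b23e107-7094-e311-b1cb-001e67a97d55
digest=sha256 dhgroup=ffdhe3072 key=key0 ckey=ckey0

# 1. Restrict the host-side initiator to the digest/dhgroup under test.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host NQN on the subsystem with the DH-CHAP key pair (target side).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# 3. Attach a controller from the host app and verify the authenticated qpair.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'   # expect state "completed"
hostrpc bdev_nvme_detach_controller nvme0

# 4. Repeat the handshake with the kernel initiator, then clean up.
#    The real DHHC-1 secrets appear in full in the trace; placeholders here.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "DHHC-1:00:<secret from trace>" \
    --dhchap-ctrl-secret "DHHC-1:03:<ctrl secret from trace>"
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
```

The outer loops in auth.sh then advance dhgroup through ffdhe3072, ffdhe4096 and ffdhe6144 and keyid through 0..3, which is why the same add_host/attach/get_qpairs/detach/connect/disconnect/remove_host sequence recurs throughout this portion of the log.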
00:19:37.947 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.947 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.246 { 00:19:38.246 "cntlid": 29, 00:19:38.246 "qid": 0, 00:19:38.246 "state": "enabled", 00:19:38.246 "thread": "nvmf_tgt_poll_group_000", 00:19:38.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:38.246 "listen_address": { 00:19:38.246 "trtype": "TCP", 00:19:38.246 "adrfam": "IPv4", 00:19:38.246 "traddr": "10.0.0.2", 00:19:38.246 "trsvcid": "4420" 00:19:38.246 }, 00:19:38.246 "peer_address": { 00:19:38.246 "trtype": "TCP", 00:19:38.246 "adrfam": "IPv4", 00:19:38.246 "traddr": "10.0.0.1", 00:19:38.246 "trsvcid": "48798" 00:19:38.246 }, 00:19:38.246 "auth": { 00:19:38.246 "state": "completed", 00:19:38.246 "digest": "sha256", 00:19:38.246 "dhgroup": "ffdhe4096" 00:19:38.246 } 00:19:38.246 } 00:19:38.246 ]' 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.246 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.531 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:19:38.531 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: 
--dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:19:39.465 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.465 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.465 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.465 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.465 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.465 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.465 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.465 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.723 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.289 00:19:40.289 23:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.289 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.289 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.289 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.289 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.289 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.289 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.547 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.547 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.547 { 00:19:40.547 "cntlid": 31, 00:19:40.547 "qid": 0, 00:19:40.547 "state": "enabled", 00:19:40.547 "thread": "nvmf_tgt_poll_group_000", 00:19:40.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:40.547 "listen_address": { 00:19:40.547 "trtype": "TCP", 00:19:40.547 "adrfam": "IPv4", 00:19:40.547 "traddr": "10.0.0.2", 00:19:40.547 "trsvcid": "4420" 00:19:40.547 }, 00:19:40.547 "peer_address": { 00:19:40.547 "trtype": "TCP", 00:19:40.547 "adrfam": "IPv4", 00:19:40.547 "traddr": "10.0.0.1", 00:19:40.547 "trsvcid": "48836" 00:19:40.547 }, 00:19:40.547 "auth": { 00:19:40.547 "state": "completed", 00:19:40.547 "digest": "sha256", 00:19:40.547 "dhgroup": "ffdhe4096" 00:19:40.547 } 00:19:40.547 } 00:19:40.547 ]' 00:19:40.547 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.548 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.548 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.548 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.548 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.548 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.548 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.548 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.806 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:19:40.806 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:19:41.739 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.739 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.739 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.739 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.739 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.739 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.739 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.739 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:41.739 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.306 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.564 00:19:42.822 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.822 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.822 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.080 { 00:19:43.080 "cntlid": 33, 00:19:43.080 "qid": 0, 00:19:43.080 "state": "enabled", 00:19:43.080 "thread": "nvmf_tgt_poll_group_000", 00:19:43.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:43.080 "listen_address": { 00:19:43.080 "trtype": "TCP", 00:19:43.080 "adrfam": "IPv4", 00:19:43.080 "traddr": "10.0.0.2", 00:19:43.080 "trsvcid": "4420" 00:19:43.080 }, 00:19:43.080 "peer_address": { 00:19:43.080 "trtype": "TCP", 00:19:43.080 "adrfam": "IPv4", 00:19:43.080 "traddr": "10.0.0.1", 00:19:43.080 "trsvcid": "48874" 00:19:43.080 }, 00:19:43.080 "auth": { 00:19:43.080 "state": "completed", 00:19:43.080 "digest": "sha256", 00:19:43.080 "dhgroup": "ffdhe6144" 00:19:43.080 } 00:19:43.080 } 00:19:43.080 ]' 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.080 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.338 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret 
DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:19:43.338 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:19:44.714 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.714 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.714 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.714 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.714 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.714 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.714 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.714 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.714 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:44.714 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.714 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.714 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:44.715 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:44.715 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.715 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.715 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.715 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.715 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.715 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.715 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.715 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.281 00:19:45.281 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.281 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.281 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.538 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.538 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.538 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.538 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.538 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.538 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.538 { 00:19:45.538 "cntlid": 35, 00:19:45.538 "qid": 0, 00:19:45.538 "state": "enabled", 00:19:45.538 "thread": "nvmf_tgt_poll_group_000", 00:19:45.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:45.538 "listen_address": { 00:19:45.538 "trtype": "TCP", 00:19:45.538 "adrfam": "IPv4", 00:19:45.538 "traddr": "10.0.0.2", 00:19:45.538 "trsvcid": "4420" 00:19:45.538 }, 00:19:45.538 "peer_address": { 00:19:45.538 "trtype": "TCP", 00:19:45.538 "adrfam": "IPv4", 00:19:45.538 "traddr": "10.0.0.1", 00:19:45.538 "trsvcid": "49366" 00:19:45.538 }, 00:19:45.538 "auth": { 00:19:45.538 "state": "completed", 00:19:45.538 "digest": "sha256", 00:19:45.538 "dhgroup": "ffdhe6144" 00:19:45.538 } 00:19:45.538 } 00:19:45.538 ]' 00:19:45.538 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.538 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.538 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.539 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.539 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.796 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.796 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.796 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.054 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:19:46.054 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:19:46.988 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.988 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.988 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.988 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.988 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.988 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.988 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.988 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:47.245 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:47.246 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.246 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.246 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:47.246 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:47.246 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.246 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.246 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.246 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.246 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.246 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.246 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.246 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.812 00:19:47.812 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.812 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.812 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.071 { 00:19:48.071 "cntlid": 37, 00:19:48.071 "qid": 0, 00:19:48.071 "state": "enabled", 00:19:48.071 "thread": "nvmf_tgt_poll_group_000", 00:19:48.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:48.071 "listen_address": { 00:19:48.071 "trtype": "TCP", 00:19:48.071 "adrfam": "IPv4", 00:19:48.071 "traddr": "10.0.0.2", 00:19:48.071 "trsvcid": "4420" 00:19:48.071 }, 00:19:48.071 "peer_address": { 00:19:48.071 "trtype": "TCP", 00:19:48.071 "adrfam": "IPv4", 00:19:48.071 "traddr": "10.0.0.1", 00:19:48.071 "trsvcid": "49394" 00:19:48.071 }, 00:19:48.071 "auth": { 00:19:48.071 "state": "completed", 00:19:48.071 "digest": "sha256", 00:19:48.071 "dhgroup": "ffdhe6144" 00:19:48.071 } 00:19:48.071 } 00:19:48.071 ]' 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:48.071 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.637 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:19:48.637 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:19:49.570 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.570 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.570 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.570 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.570 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.570 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.570 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.570 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.829 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:49.829 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.829 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.829 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:49.829 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:49.829 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.829 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:49.829 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.829 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.829 23:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.829 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:49.829 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.829 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.394 00:19:50.394 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.394 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.394 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.653 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.653 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.653 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.653 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.653 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.653 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.653 { 00:19:50.653 "cntlid": 39, 00:19:50.653 "qid": 0, 00:19:50.653 "state": "enabled", 00:19:50.653 "thread": "nvmf_tgt_poll_group_000", 00:19:50.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:50.653 "listen_address": { 00:19:50.653 "trtype": "TCP", 00:19:50.653 "adrfam": "IPv4", 00:19:50.653 "traddr": "10.0.0.2", 00:19:50.653 "trsvcid": "4420" 00:19:50.653 }, 00:19:50.653 "peer_address": { 00:19:50.653 "trtype": "TCP", 00:19:50.653 "adrfam": "IPv4", 00:19:50.653 "traddr": "10.0.0.1", 00:19:50.653 "trsvcid": "49424" 00:19:50.653 }, 00:19:50.653 "auth": { 00:19:50.653 "state": "completed", 00:19:50.653 "digest": "sha256", 00:19:50.653 "dhgroup": "ffdhe6144" 00:19:50.653 } 00:19:50.653 } 00:19:50.653 ]' 00:19:50.653 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.653 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.653 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.911 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.911 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.911 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:50.911 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.911 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.170 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:19:51.170 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:19:52.104 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.104 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.104 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.104 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.104 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.104 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.104 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.104 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.104 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
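The entries above complete the sha256 / ffdhe6144 pass over keys 0-3 and start the same checks again with ffdhe8192. Every iteration of this auth sweep runs the same RPC sequence: restrict the host to one digest and DH group, allow the host NQN on the subsystem with a key pair, attach a controller (which is where DH-HMAC-CHAP actually runs), read the negotiated parameters back from the target's qpair, then tear everything down. The following is a minimal sketch of one such iteration, reconstructed from the commands visible in this log rather than taken from target/auth.sh itself; the socket path, addresses and NQNs are the ones logged above, TGT_RPC_SOCK assumes the target uses SPDK's default RPC socket, and key1/ckey1 stand for key names registered earlier in the run.

  # One DH-HMAC-CHAP iteration: host RPCs on /var/tmp/host.sock, target RPCs on the target's own socket.
  RPC=./scripts/rpc.py                       # relative to an SPDK checkout
  HOST_SOCK=/var/tmp/host.sock
  TGT_RPC_SOCK=/var/tmp/spdk.sock            # assumed default target socket
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Host: only negotiate this digest/dhgroup combination.
  $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Target: allow the host with a bidirectional key pair.
  $RPC -s $TGT_RPC_SOCK nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host: attach a controller; authentication happens during connect.
  $RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Target: confirm what the qpair actually negotiated (jq -e exits non-zero on mismatch).
  $RPC -s $TGT_RPC_SOCK nvmf_subsystem_get_qpairs $SUBNQN |
      jq -e '.[0].auth | .state == "completed" and .digest == "sha256" and .dhgroup == "ffdhe8192"'

  # Tear down before the next key/dhgroup/digest combination.
  $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
  $RPC -s $TGT_RPC_SOCK nvmf_subsystem_remove_host $SUBNQN $HOSTNQN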
00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.362 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.296 00:19:53.296 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.296 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.296 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.553 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.553 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.553 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.553 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.553 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.553 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.553 { 00:19:53.553 "cntlid": 41, 00:19:53.553 "qid": 0, 00:19:53.553 "state": "enabled", 00:19:53.553 "thread": "nvmf_tgt_poll_group_000", 00:19:53.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.553 "listen_address": { 00:19:53.553 "trtype": "TCP", 00:19:53.553 "adrfam": "IPv4", 00:19:53.553 "traddr": "10.0.0.2", 00:19:53.553 "trsvcid": "4420" 00:19:53.553 }, 00:19:53.553 "peer_address": { 00:19:53.553 "trtype": "TCP", 00:19:53.553 "adrfam": "IPv4", 00:19:53.553 "traddr": "10.0.0.1", 00:19:53.553 "trsvcid": "49452" 00:19:53.553 }, 00:19:53.553 "auth": { 00:19:53.553 "state": "completed", 00:19:53.553 "digest": "sha256", 00:19:53.553 "dhgroup": "ffdhe8192" 00:19:53.553 } 00:19:53.553 } 00:19:53.553 ]' 00:19:53.553 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.554 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.554 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.812 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:53.812 23:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.812 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.812 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.812 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.070 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:19:54.070 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:19:55.002 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.002 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.002 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.002 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.002 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.002 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.002 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.002 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.260 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:55.260 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.260 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.260 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:55.260 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:55.260 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.260 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.260 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.260 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.518 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.518 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.518 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.518 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.452 00:19:56.452 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.452 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.452 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.710 { 00:19:56.710 "cntlid": 43, 00:19:56.710 "qid": 0, 00:19:56.710 "state": "enabled", 00:19:56.710 "thread": "nvmf_tgt_poll_group_000", 00:19:56.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.710 "listen_address": { 00:19:56.710 "trtype": "TCP", 00:19:56.710 "adrfam": "IPv4", 00:19:56.710 "traddr": "10.0.0.2", 00:19:56.710 "trsvcid": "4420" 00:19:56.710 }, 00:19:56.710 "peer_address": { 00:19:56.710 "trtype": "TCP", 00:19:56.710 "adrfam": "IPv4", 00:19:56.710 "traddr": "10.0.0.1", 00:19:56.710 "trsvcid": "50486" 00:19:56.710 }, 00:19:56.710 "auth": { 00:19:56.710 "state": "completed", 00:19:56.710 "digest": "sha256", 00:19:56.710 "dhgroup": "ffdhe8192" 00:19:56.710 } 00:19:56.710 } 00:19:56.710 ]' 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.710 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.968 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:19:56.968 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:19:57.904 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.904 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.904 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.904 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.904 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.904 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.904 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.904 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.472 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:58.472 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.472 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.472 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.472 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:58.472 23:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.472 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.472 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.472 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.472 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.472 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.472 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.472 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.406 00:19:59.406 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.406 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.406 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.664 { 00:19:59.664 "cntlid": 45, 00:19:59.664 "qid": 0, 00:19:59.664 "state": "enabled", 00:19:59.664 "thread": "nvmf_tgt_poll_group_000", 00:19:59.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:59.664 "listen_address": { 00:19:59.664 "trtype": "TCP", 00:19:59.664 "adrfam": "IPv4", 00:19:59.664 "traddr": "10.0.0.2", 00:19:59.664 "trsvcid": "4420" 00:19:59.664 }, 00:19:59.664 "peer_address": { 00:19:59.664 "trtype": "TCP", 00:19:59.664 "adrfam": "IPv4", 00:19:59.664 "traddr": "10.0.0.1", 00:19:59.664 "trsvcid": "50518" 00:19:59.664 }, 00:19:59.664 "auth": { 00:19:59.664 "state": "completed", 00:19:59.664 "digest": "sha256", 00:19:59.664 "dhgroup": "ffdhe8192" 00:19:59.664 } 00:19:59.664 } 00:19:59.664 ]' 00:19:59.664 
23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.664 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.922 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:19:59.922 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:20:01.295 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.296 23:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.296 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.228 00:20:02.228 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.228 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.228 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.486 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.486 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.486 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.486 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.486 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.486 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.486 { 00:20:02.486 "cntlid": 47, 00:20:02.486 "qid": 0, 00:20:02.486 "state": "enabled", 00:20:02.486 "thread": "nvmf_tgt_poll_group_000", 00:20:02.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:02.486 "listen_address": { 00:20:02.486 "trtype": "TCP", 00:20:02.486 "adrfam": "IPv4", 00:20:02.486 "traddr": "10.0.0.2", 00:20:02.486 "trsvcid": "4420" 00:20:02.486 }, 00:20:02.486 "peer_address": { 00:20:02.486 "trtype": "TCP", 00:20:02.486 "adrfam": "IPv4", 00:20:02.486 "traddr": "10.0.0.1", 00:20:02.486 "trsvcid": "50554" 00:20:02.486 }, 00:20:02.486 "auth": { 00:20:02.486 "state": "completed", 00:20:02.486 
"digest": "sha256", 00:20:02.486 "dhgroup": "ffdhe8192" 00:20:02.486 } 00:20:02.486 } 00:20:02.486 ]' 00:20:02.486 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.486 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.486 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.486 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.486 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.744 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.744 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.744 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.003 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:20:03.003 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:20:03.934 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.934 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.934 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.934 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.934 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.934 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:03.934 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.934 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.934 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.934 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:04.192 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:04.192 23:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.192 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.192 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:04.192 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:04.192 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.192 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.192 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.192 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.192 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.192 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.192 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.192 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.449 00:20:04.449 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.449 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.449 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.708 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.708 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.708 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.708 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.708 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.708 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.708 { 00:20:04.708 "cntlid": 49, 00:20:04.708 "qid": 0, 00:20:04.708 "state": "enabled", 00:20:04.708 "thread": "nvmf_tgt_poll_group_000", 00:20:04.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:04.708 "listen_address": { 00:20:04.708 "trtype": "TCP", 00:20:04.708 "adrfam": "IPv4", 
00:20:04.708 "traddr": "10.0.0.2", 00:20:04.708 "trsvcid": "4420" 00:20:04.708 }, 00:20:04.708 "peer_address": { 00:20:04.708 "trtype": "TCP", 00:20:04.708 "adrfam": "IPv4", 00:20:04.708 "traddr": "10.0.0.1", 00:20:04.708 "trsvcid": "55054" 00:20:04.708 }, 00:20:04.708 "auth": { 00:20:04.708 "state": "completed", 00:20:04.708 "digest": "sha384", 00:20:04.708 "dhgroup": "null" 00:20:04.708 } 00:20:04.708 } 00:20:04.708 ]' 00:20:04.708 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.966 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.966 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.966 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:04.966 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.966 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.966 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.966 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.224 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:20:05.224 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:20:06.157 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.157 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.157 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.157 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.157 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.157 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.157 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:06.157 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.415 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.673 00:20:06.931 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.931 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.931 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.189 { 00:20:07.189 "cntlid": 51, 00:20:07.189 "qid": 0, 00:20:07.189 "state": "enabled", 
00:20:07.189 "thread": "nvmf_tgt_poll_group_000", 00:20:07.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.189 "listen_address": { 00:20:07.189 "trtype": "TCP", 00:20:07.189 "adrfam": "IPv4", 00:20:07.189 "traddr": "10.0.0.2", 00:20:07.189 "trsvcid": "4420" 00:20:07.189 }, 00:20:07.189 "peer_address": { 00:20:07.189 "trtype": "TCP", 00:20:07.189 "adrfam": "IPv4", 00:20:07.189 "traddr": "10.0.0.1", 00:20:07.189 "trsvcid": "55062" 00:20:07.189 }, 00:20:07.189 "auth": { 00:20:07.189 "state": "completed", 00:20:07.189 "digest": "sha384", 00:20:07.189 "dhgroup": "null" 00:20:07.189 } 00:20:07.189 } 00:20:07.189 ]' 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.189 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.447 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:20:07.447 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:20:08.439 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.439 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.439 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.439 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.439 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.439 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.439 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:08.439 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.005 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:09.005 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.005 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.005 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:09.005 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:09.005 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.005 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.005 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.005 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.005 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.005 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.006 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.006 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.264 00:20:09.264 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.264 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.264 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.522 23:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.522 { 00:20:09.522 "cntlid": 53, 00:20:09.522 "qid": 0, 00:20:09.522 "state": "enabled", 00:20:09.522 "thread": "nvmf_tgt_poll_group_000", 00:20:09.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:09.522 "listen_address": { 00:20:09.522 "trtype": "TCP", 00:20:09.522 "adrfam": "IPv4", 00:20:09.522 "traddr": "10.0.0.2", 00:20:09.522 "trsvcid": "4420" 00:20:09.522 }, 00:20:09.522 "peer_address": { 00:20:09.522 "trtype": "TCP", 00:20:09.522 "adrfam": "IPv4", 00:20:09.522 "traddr": "10.0.0.1", 00:20:09.522 "trsvcid": "55080" 00:20:09.522 }, 00:20:09.522 "auth": { 00:20:09.522 "state": "completed", 00:20:09.522 "digest": "sha384", 00:20:09.522 "dhgroup": "null" 00:20:09.522 } 00:20:09.522 } 00:20:09.522 ]' 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.522 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.781 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:20:09.781 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:20:11.154 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.154 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.154 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.154 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.154 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.154 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:11.154 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.154 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.154 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.413 00:20:11.413 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.413 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.413 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.670 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.670 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.670 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.670 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.929 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.929 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.929 { 00:20:11.929 "cntlid": 55, 00:20:11.929 "qid": 0, 00:20:11.929 "state": "enabled", 00:20:11.929 "thread": "nvmf_tgt_poll_group_000", 00:20:11.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:11.929 "listen_address": { 00:20:11.929 "trtype": "TCP", 00:20:11.929 "adrfam": "IPv4", 00:20:11.929 "traddr": "10.0.0.2", 00:20:11.929 "trsvcid": "4420" 00:20:11.929 }, 00:20:11.929 "peer_address": { 00:20:11.929 "trtype": "TCP", 00:20:11.929 "adrfam": "IPv4", 00:20:11.929 "traddr": "10.0.0.1", 00:20:11.929 "trsvcid": "55102" 00:20:11.929 }, 00:20:11.929 "auth": { 00:20:11.929 "state": "completed", 00:20:11.929 "digest": "sha384", 00:20:11.929 "dhgroup": "null" 00:20:11.929 } 00:20:11.929 } 00:20:11.929 ]' 00:20:11.929 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.929 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.929 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.929 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:11.929 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.929 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.929 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.929 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.187 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:20:12.187 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:20:13.121 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.121 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.121 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.121 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.121 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.121 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.121 23:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.121 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.121 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.379 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:13.379 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.379 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.379 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:13.379 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.379 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.379 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.379 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.379 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.636 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.636 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.636 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.636 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.894 00:20:13.894 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.894 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.894 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.152 { 00:20:14.152 "cntlid": 57, 00:20:14.152 "qid": 0, 00:20:14.152 "state": "enabled", 00:20:14.152 "thread": "nvmf_tgt_poll_group_000", 00:20:14.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:14.152 "listen_address": { 00:20:14.152 "trtype": "TCP", 00:20:14.152 "adrfam": "IPv4", 00:20:14.152 "traddr": "10.0.0.2", 00:20:14.152 "trsvcid": "4420" 00:20:14.152 }, 00:20:14.152 "peer_address": { 00:20:14.152 "trtype": "TCP", 00:20:14.152 "adrfam": "IPv4", 00:20:14.152 "traddr": "10.0.0.1", 00:20:14.152 "trsvcid": "43256" 00:20:14.152 }, 00:20:14.152 "auth": { 00:20:14.152 "state": "completed", 00:20:14.152 "digest": "sha384", 00:20:14.152 "dhgroup": "ffdhe2048" 00:20:14.152 } 00:20:14.152 } 00:20:14.152 ]' 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.152 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.717 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:20:14.717 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:20:15.651 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.651 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.651 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.651 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.651 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.651 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.651 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:15.651 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.909 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.167 00:20:16.167 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.167 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.167 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.425 { 00:20:16.425 "cntlid": 59, 00:20:16.425 "qid": 0, 00:20:16.425 "state": "enabled", 00:20:16.425 "thread": "nvmf_tgt_poll_group_000", 00:20:16.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:16.425 "listen_address": { 00:20:16.425 "trtype": "TCP", 00:20:16.425 "adrfam": "IPv4", 00:20:16.425 "traddr": "10.0.0.2", 00:20:16.425 "trsvcid": "4420" 00:20:16.425 }, 00:20:16.425 "peer_address": { 00:20:16.425 "trtype": "TCP", 00:20:16.425 "adrfam": "IPv4", 00:20:16.425 "traddr": "10.0.0.1", 00:20:16.425 "trsvcid": "43280" 00:20:16.425 }, 00:20:16.425 "auth": { 00:20:16.425 "state": "completed", 00:20:16.425 "digest": "sha384", 00:20:16.425 "dhgroup": "ffdhe2048" 00:20:16.425 } 00:20:16.425 } 00:20:16.425 ]' 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.425 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.991 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:20:16.991 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:20:17.925 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.925 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.925 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.925 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.925 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.925 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.925 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.925 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.183 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.441 00:20:18.441 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.441 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.441 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.700 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.700 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.700 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.700 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.700 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.700 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.700 { 00:20:18.700 "cntlid": 61, 00:20:18.700 "qid": 0, 00:20:18.700 "state": "enabled", 00:20:18.700 "thread": "nvmf_tgt_poll_group_000", 00:20:18.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:18.700 "listen_address": { 00:20:18.700 "trtype": "TCP", 00:20:18.700 "adrfam": "IPv4", 00:20:18.700 "traddr": "10.0.0.2", 00:20:18.700 "trsvcid": "4420" 00:20:18.700 }, 00:20:18.700 "peer_address": { 00:20:18.700 "trtype": "TCP", 00:20:18.700 "adrfam": "IPv4", 00:20:18.700 "traddr": "10.0.0.1", 00:20:18.700 "trsvcid": "43300" 00:20:18.700 }, 00:20:18.700 "auth": { 00:20:18.700 "state": "completed", 00:20:18.700 "digest": "sha384", 00:20:18.700 "dhgroup": "ffdhe2048" 00:20:18.700 } 00:20:18.700 } 00:20:18.700 ]' 00:20:18.700 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.700 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.700 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.700 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.700 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.958 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.958 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.958 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.216 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:20:19.216 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:20:20.149 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.149 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.149 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.149 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.149 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.149 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.149 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:20.149 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.407 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.665 00:20:20.923 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.923 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.923 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.182 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.182 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.182 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.182 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.182 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.182 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.182 { 00:20:21.182 "cntlid": 63, 00:20:21.182 "qid": 0, 00:20:21.182 "state": "enabled", 00:20:21.183 "thread": "nvmf_tgt_poll_group_000", 00:20:21.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:21.183 "listen_address": { 00:20:21.183 "trtype": "TCP", 00:20:21.183 "adrfam": "IPv4", 00:20:21.183 "traddr": "10.0.0.2", 00:20:21.183 "trsvcid": "4420" 00:20:21.183 }, 00:20:21.183 "peer_address": { 00:20:21.183 "trtype": "TCP", 00:20:21.183 "adrfam": "IPv4", 00:20:21.183 "traddr": "10.0.0.1", 00:20:21.183 "trsvcid": "43334" 00:20:21.183 }, 00:20:21.183 "auth": { 00:20:21.183 "state": "completed", 00:20:21.183 "digest": "sha384", 00:20:21.183 "dhgroup": "ffdhe2048" 00:20:21.183 } 00:20:21.183 } 00:20:21.183 ]' 00:20:21.183 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.183 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.183 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.183 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.183 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.183 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.183 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.183 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.441 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:20:21.441 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:20:22.375 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:22.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.375 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.375 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.375 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.375 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.375 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.375 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.375 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.376 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.634 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:22.634 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.634 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.634 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:22.634 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.634 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.634 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.634 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.634 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.635 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.635 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.635 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.635 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.200 
00:20:23.200 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.200 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.201 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.458 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.459 { 00:20:23.459 "cntlid": 65, 00:20:23.459 "qid": 0, 00:20:23.459 "state": "enabled", 00:20:23.459 "thread": "nvmf_tgt_poll_group_000", 00:20:23.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:23.459 "listen_address": { 00:20:23.459 "trtype": "TCP", 00:20:23.459 "adrfam": "IPv4", 00:20:23.459 "traddr": "10.0.0.2", 00:20:23.459 "trsvcid": "4420" 00:20:23.459 }, 00:20:23.459 "peer_address": { 00:20:23.459 "trtype": "TCP", 00:20:23.459 "adrfam": "IPv4", 00:20:23.459 "traddr": "10.0.0.1", 00:20:23.459 "trsvcid": "43354" 00:20:23.459 }, 00:20:23.459 "auth": { 00:20:23.459 "state": "completed", 00:20:23.459 "digest": "sha384", 00:20:23.459 "dhgroup": "ffdhe3072" 00:20:23.459 } 00:20:23.459 } 00:20:23.459 ]' 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.459 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.716 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:20:23.716 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:20:24.649 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.907 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.907 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.907 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.907 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.907 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.907 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.907 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.166 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.424 00:20:25.424 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.424 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.424 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.683 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.683 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.683 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.683 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.683 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.683 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.683 { 00:20:25.683 "cntlid": 67, 00:20:25.683 "qid": 0, 00:20:25.683 "state": "enabled", 00:20:25.683 "thread": "nvmf_tgt_poll_group_000", 00:20:25.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:25.683 "listen_address": { 00:20:25.683 "trtype": "TCP", 00:20:25.683 "adrfam": "IPv4", 00:20:25.683 "traddr": "10.0.0.2", 00:20:25.683 "trsvcid": "4420" 00:20:25.683 }, 00:20:25.683 "peer_address": { 00:20:25.683 "trtype": "TCP", 00:20:25.683 "adrfam": "IPv4", 00:20:25.683 "traddr": "10.0.0.1", 00:20:25.683 "trsvcid": "35500" 00:20:25.683 }, 00:20:25.683 "auth": { 00:20:25.683 "state": "completed", 00:20:25.683 "digest": "sha384", 00:20:25.683 "dhgroup": "ffdhe3072" 00:20:25.683 } 00:20:25.683 } 00:20:25.683 ]' 00:20:25.683 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.941 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.941 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.941 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.941 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.941 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.941 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.941 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.199 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret 
DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:20:26.199 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:20:27.132 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.132 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.132 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.132 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.132 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.132 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.132 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.132 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.956 00:20:27.956 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.956 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.956 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.214 { 00:20:28.214 "cntlid": 69, 00:20:28.214 "qid": 0, 00:20:28.214 "state": "enabled", 00:20:28.214 "thread": "nvmf_tgt_poll_group_000", 00:20:28.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:28.214 "listen_address": { 00:20:28.214 "trtype": "TCP", 00:20:28.214 "adrfam": "IPv4", 00:20:28.214 "traddr": "10.0.0.2", 00:20:28.214 "trsvcid": "4420" 00:20:28.214 }, 00:20:28.214 "peer_address": { 00:20:28.214 "trtype": "TCP", 00:20:28.214 "adrfam": "IPv4", 00:20:28.214 "traddr": "10.0.0.1", 00:20:28.214 "trsvcid": "35512" 00:20:28.214 }, 00:20:28.214 "auth": { 00:20:28.214 "state": "completed", 00:20:28.214 "digest": "sha384", 00:20:28.214 "dhgroup": "ffdhe3072" 00:20:28.214 } 00:20:28.214 } 00:20:28.214 ]' 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.214 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:28.780 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:20:28.780 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:20:29.715 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.715 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.715 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.715 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.715 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.715 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.715 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.715 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.973 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.231 00:20:30.231 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.231 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.231 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.489 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.489 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.489 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.489 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.489 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.489 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.489 { 00:20:30.489 "cntlid": 71, 00:20:30.489 "qid": 0, 00:20:30.489 "state": "enabled", 00:20:30.489 "thread": "nvmf_tgt_poll_group_000", 00:20:30.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.489 "listen_address": { 00:20:30.489 "trtype": "TCP", 00:20:30.489 "adrfam": "IPv4", 00:20:30.489 "traddr": "10.0.0.2", 00:20:30.489 "trsvcid": "4420" 00:20:30.489 }, 00:20:30.489 "peer_address": { 00:20:30.489 "trtype": "TCP", 00:20:30.489 "adrfam": "IPv4", 00:20:30.489 "traddr": "10.0.0.1", 00:20:30.489 "trsvcid": "35550" 00:20:30.489 }, 00:20:30.489 "auth": { 00:20:30.489 "state": "completed", 00:20:30.489 "digest": "sha384", 00:20:30.489 "dhgroup": "ffdhe3072" 00:20:30.489 } 00:20:30.489 } 00:20:30.489 ]' 00:20:30.489 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.489 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.489 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.489 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.489 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.747 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.747 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.747 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.004 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:20:31.004 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:20:31.938 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.938 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.938 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.938 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.938 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.938 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.938 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.938 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.938 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.196 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.454 00:20:32.454 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.454 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.454 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.018 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.018 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.018 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.018 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.018 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.018 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.018 { 00:20:33.018 "cntlid": 73, 00:20:33.018 "qid": 0, 00:20:33.018 "state": "enabled", 00:20:33.018 "thread": "nvmf_tgt_poll_group_000", 00:20:33.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:33.018 "listen_address": { 00:20:33.018 "trtype": "TCP", 00:20:33.018 "adrfam": "IPv4", 00:20:33.018 "traddr": "10.0.0.2", 00:20:33.018 "trsvcid": "4420" 00:20:33.018 }, 00:20:33.018 "peer_address": { 00:20:33.018 "trtype": "TCP", 00:20:33.018 "adrfam": "IPv4", 00:20:33.018 "traddr": "10.0.0.1", 00:20:33.018 "trsvcid": "35590" 00:20:33.018 }, 00:20:33.018 "auth": { 00:20:33.018 "state": "completed", 00:20:33.018 "digest": "sha384", 00:20:33.018 "dhgroup": "ffdhe4096" 00:20:33.018 } 00:20:33.018 } 00:20:33.018 ]' 00:20:33.018 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.018 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.018 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.018 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.018 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.018 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.018 
23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.018 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.275 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:20:33.275 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:20:34.208 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.208 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.208 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.208 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.208 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.208 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.208 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.208 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.467 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.033 00:20:35.033 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.033 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.033 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.291 { 00:20:35.291 "cntlid": 75, 00:20:35.291 "qid": 0, 00:20:35.291 "state": "enabled", 00:20:35.291 "thread": "nvmf_tgt_poll_group_000", 00:20:35.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:35.291 "listen_address": { 00:20:35.291 "trtype": "TCP", 00:20:35.291 "adrfam": "IPv4", 00:20:35.291 "traddr": "10.0.0.2", 00:20:35.291 "trsvcid": "4420" 00:20:35.291 }, 00:20:35.291 "peer_address": { 00:20:35.291 "trtype": "TCP", 00:20:35.291 "adrfam": "IPv4", 00:20:35.291 "traddr": "10.0.0.1", 00:20:35.291 "trsvcid": "42818" 00:20:35.291 }, 00:20:35.291 "auth": { 00:20:35.291 "state": "completed", 00:20:35.291 "digest": "sha384", 00:20:35.291 "dhgroup": "ffdhe4096" 00:20:35.291 } 00:20:35.291 } 00:20:35.291 ]' 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.291 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.857 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:20:35.857 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:20:36.791 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.791 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.791 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.791 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.791 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.791 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.791 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.791 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.049 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:37.049 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.049 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.049 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:37.049 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.049 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.049 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.049 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.049 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.049 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.049 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.049 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.049 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.307 00:20:37.307 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.307 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.307 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.565 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.565 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.565 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.565 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.565 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.565 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.565 { 00:20:37.565 "cntlid": 77, 00:20:37.565 "qid": 0, 00:20:37.565 "state": "enabled", 00:20:37.565 "thread": "nvmf_tgt_poll_group_000", 00:20:37.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.565 "listen_address": { 00:20:37.565 "trtype": "TCP", 00:20:37.565 "adrfam": "IPv4", 00:20:37.565 "traddr": "10.0.0.2", 00:20:37.565 "trsvcid": "4420" 00:20:37.565 }, 00:20:37.565 "peer_address": { 00:20:37.565 "trtype": "TCP", 00:20:37.565 "adrfam": "IPv4", 00:20:37.565 "traddr": "10.0.0.1", 00:20:37.565 "trsvcid": "42836" 00:20:37.565 }, 00:20:37.565 "auth": { 00:20:37.565 "state": "completed", 00:20:37.565 "digest": "sha384", 00:20:37.565 "dhgroup": "ffdhe4096" 00:20:37.565 } 00:20:37.565 } 00:20:37.565 ]' 00:20:37.565 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.565 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.565 23:54:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.823 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.823 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.823 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.823 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.823 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.107 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:20:38.107 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:20:39.068 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.069 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.069 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.069 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.069 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.069 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.069 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.069 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.327 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.892 00:20:39.892 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.892 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.892 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.150 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.150 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.150 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.150 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.150 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.150 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.150 { 00:20:40.150 "cntlid": 79, 00:20:40.150 "qid": 0, 00:20:40.150 "state": "enabled", 00:20:40.150 "thread": "nvmf_tgt_poll_group_000", 00:20:40.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:40.150 "listen_address": { 00:20:40.150 "trtype": "TCP", 00:20:40.151 "adrfam": "IPv4", 00:20:40.151 "traddr": "10.0.0.2", 00:20:40.151 "trsvcid": "4420" 00:20:40.151 }, 00:20:40.151 "peer_address": { 00:20:40.151 "trtype": "TCP", 00:20:40.151 "adrfam": "IPv4", 00:20:40.151 "traddr": "10.0.0.1", 00:20:40.151 "trsvcid": "42860" 00:20:40.151 }, 00:20:40.151 "auth": { 00:20:40.151 "state": "completed", 00:20:40.151 "digest": "sha384", 00:20:40.151 "dhgroup": "ffdhe4096" 00:20:40.151 } 00:20:40.151 } 00:20:40.151 ]' 00:20:40.151 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.151 23:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.151 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.151 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.151 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.151 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.151 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.151 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.409 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:20:40.409 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:20:41.343 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.343 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.343 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.343 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.343 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.343 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.343 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.343 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.343 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.601 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:41.601 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.601 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.601 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.601 23:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.601 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.601 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.601 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.601 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.601 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.601 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.601 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.601 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.167 00:20:42.167 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.167 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.167 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.733 { 00:20:42.733 "cntlid": 81, 00:20:42.733 "qid": 0, 00:20:42.733 "state": "enabled", 00:20:42.733 "thread": "nvmf_tgt_poll_group_000", 00:20:42.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:42.733 "listen_address": { 00:20:42.733 "trtype": "TCP", 00:20:42.733 "adrfam": "IPv4", 00:20:42.733 "traddr": "10.0.0.2", 00:20:42.733 "trsvcid": "4420" 00:20:42.733 }, 00:20:42.733 "peer_address": { 00:20:42.733 "trtype": "TCP", 00:20:42.733 "adrfam": "IPv4", 00:20:42.733 "traddr": "10.0.0.1", 00:20:42.733 "trsvcid": "42894" 00:20:42.733 }, 00:20:42.733 "auth": { 00:20:42.733 "state": "completed", 00:20:42.733 "digest": 
"sha384", 00:20:42.733 "dhgroup": "ffdhe6144" 00:20:42.733 } 00:20:42.733 } 00:20:42.733 ]' 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.733 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.992 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:20:42.992 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:20:43.929 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.929 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.929 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.929 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.929 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.929 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.929 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.929 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.186 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.752 00:20:44.752 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.752 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.752 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.010 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.010 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.010 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.010 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.010 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.010 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.010 { 00:20:45.010 "cntlid": 83, 00:20:45.010 "qid": 0, 00:20:45.010 "state": "enabled", 00:20:45.010 "thread": "nvmf_tgt_poll_group_000", 00:20:45.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.010 "listen_address": { 00:20:45.010 "trtype": "TCP", 00:20:45.010 "adrfam": "IPv4", 00:20:45.010 "traddr": "10.0.0.2", 00:20:45.010 
"trsvcid": "4420" 00:20:45.010 }, 00:20:45.010 "peer_address": { 00:20:45.010 "trtype": "TCP", 00:20:45.010 "adrfam": "IPv4", 00:20:45.010 "traddr": "10.0.0.1", 00:20:45.010 "trsvcid": "41378" 00:20:45.010 }, 00:20:45.010 "auth": { 00:20:45.010 "state": "completed", 00:20:45.010 "digest": "sha384", 00:20:45.010 "dhgroup": "ffdhe6144" 00:20:45.010 } 00:20:45.010 } 00:20:45.010 ]' 00:20:45.010 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.271 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.271 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.271 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.271 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.271 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.271 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.271 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.532 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:20:45.532 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:20:46.466 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.466 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.466 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.466 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.466 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.466 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.466 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.466 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.725 
23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:46.725 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.725 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.725 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:46.725 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:46.725 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.725 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.725 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.725 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.725 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.725 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.725 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.725 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.290 00:20:47.290 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.290 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.290 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.548 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.549 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.549 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.549 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.807 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.807 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.807 { 00:20:47.807 "cntlid": 85, 00:20:47.807 "qid": 0, 00:20:47.807 "state": "enabled", 00:20:47.807 "thread": "nvmf_tgt_poll_group_000", 00:20:47.807 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.807 "listen_address": { 00:20:47.807 "trtype": "TCP", 00:20:47.807 "adrfam": "IPv4", 00:20:47.807 "traddr": "10.0.0.2", 00:20:47.807 "trsvcid": "4420" 00:20:47.807 }, 00:20:47.807 "peer_address": { 00:20:47.807 "trtype": "TCP", 00:20:47.807 "adrfam": "IPv4", 00:20:47.807 "traddr": "10.0.0.1", 00:20:47.807 "trsvcid": "41388" 00:20:47.807 }, 00:20:47.807 "auth": { 00:20:47.807 "state": "completed", 00:20:47.807 "digest": "sha384", 00:20:47.807 "dhgroup": "ffdhe6144" 00:20:47.807 } 00:20:47.807 } 00:20:47.807 ]' 00:20:47.807 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.807 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.807 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.807 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.807 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.807 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.807 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.807 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.065 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:20:48.065 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:20:48.998 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.998 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.998 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.998 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.998 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.998 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.998 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.998 23:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.257 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.824 00:20:49.824 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.824 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.824 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.082 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.082 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.082 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.082 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.082 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.082 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.082 { 00:20:50.082 "cntlid": 87, 
00:20:50.082 "qid": 0, 00:20:50.082 "state": "enabled", 00:20:50.082 "thread": "nvmf_tgt_poll_group_000", 00:20:50.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.082 "listen_address": { 00:20:50.082 "trtype": "TCP", 00:20:50.082 "adrfam": "IPv4", 00:20:50.082 "traddr": "10.0.0.2", 00:20:50.082 "trsvcid": "4420" 00:20:50.082 }, 00:20:50.082 "peer_address": { 00:20:50.082 "trtype": "TCP", 00:20:50.082 "adrfam": "IPv4", 00:20:50.082 "traddr": "10.0.0.1", 00:20:50.082 "trsvcid": "41424" 00:20:50.082 }, 00:20:50.082 "auth": { 00:20:50.082 "state": "completed", 00:20:50.082 "digest": "sha384", 00:20:50.082 "dhgroup": "ffdhe6144" 00:20:50.082 } 00:20:50.082 } 00:20:50.082 ]' 00:20:50.082 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.082 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.082 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.340 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.340 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.340 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.340 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.340 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.598 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:20:50.598 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:20:51.531 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.532 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.532 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.532 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.532 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.532 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.532 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.532 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.532 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.790 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.724 00:20:52.724 23:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.724 23:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.724 23:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.983 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.983 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.983 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.983 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.983 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.983 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.983 { 00:20:52.983 "cntlid": 89, 00:20:52.983 "qid": 0, 00:20:52.983 "state": "enabled", 00:20:52.983 "thread": "nvmf_tgt_poll_group_000", 00:20:52.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:52.983 "listen_address": { 00:20:52.983 "trtype": "TCP", 00:20:52.983 "adrfam": "IPv4", 00:20:52.983 "traddr": "10.0.0.2", 00:20:52.983 "trsvcid": "4420" 00:20:52.983 }, 00:20:52.983 "peer_address": { 00:20:52.983 "trtype": "TCP", 00:20:52.983 "adrfam": "IPv4", 00:20:52.983 "traddr": "10.0.0.1", 00:20:52.983 "trsvcid": "41460" 00:20:52.983 }, 00:20:52.983 "auth": { 00:20:52.983 "state": "completed", 00:20:52.983 "digest": "sha384", 00:20:52.983 "dhgroup": "ffdhe8192" 00:20:52.983 } 00:20:52.983 } 00:20:52.983 ]' 00:20:52.983 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.983 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.983 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.983 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.983 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.241 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.241 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.241 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.500 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:20:53.500 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:20:54.432 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.432 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.432 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.432 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.432 23:54:20 
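The secrets passed to nvme-cli in the trace are DHHC-1 key strings of the form DHHC-1:NN:<base64 key material>:, where the NN field (00 through 03 in this log) appears to encode whether and with which SHA variant the key material was transformed. The sketch below shows the kernel-initiator half of one iteration using the same addressing as the trace; the gen-dhchap-key step and its flags are an assumption about how such keys are commonly produced with a recent nvme-cli, and the generated secrets are placeholders, not the values from this run.

# Kernel-initiator side of one iteration (addresses and NQNs from the trace, secrets freshly generated).
subnqn=nqn.2024-03.io.spdk:cnode0
hostuuid=5b23e107-7094-e311-b1cb-001e67a97d55
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostuuid
# Assumed key-generation step (not shown in this log): mint DHHC-1 secrets for host and controller keys.
key=$(nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn "$hostnqn")    # yields e.g. DHHC-1:01:...:
ckey=$(nvme gen-dhchap-key --hmac=2 --key-length=48 --nqn "$hostnqn")
# Connect with bidirectional authentication, then drop the connection again, as the trace does.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostuuid" -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n "$subnqn"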
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.432 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.432 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.432 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.690 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.623 00:20:55.623 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.623 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.623 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.881 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.881 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:55.881 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.881 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.881 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.881 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.881 { 00:20:55.881 "cntlid": 91, 00:20:55.881 "qid": 0, 00:20:55.881 "state": "enabled", 00:20:55.881 "thread": "nvmf_tgt_poll_group_000", 00:20:55.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.881 "listen_address": { 00:20:55.881 "trtype": "TCP", 00:20:55.881 "adrfam": "IPv4", 00:20:55.881 "traddr": "10.0.0.2", 00:20:55.881 "trsvcid": "4420" 00:20:55.881 }, 00:20:55.881 "peer_address": { 00:20:55.881 "trtype": "TCP", 00:20:55.881 "adrfam": "IPv4", 00:20:55.881 "traddr": "10.0.0.1", 00:20:55.881 "trsvcid": "40892" 00:20:55.881 }, 00:20:55.881 "auth": { 00:20:55.881 "state": "completed", 00:20:55.881 "digest": "sha384", 00:20:55.881 "dhgroup": "ffdhe8192" 00:20:55.881 } 00:20:55.881 } 00:20:55.881 ]' 00:20:55.881 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.881 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.881 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.881 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.881 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.881 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.881 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.881 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.447 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:20:56.447 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:20:57.381 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.381 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.381 23:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.381 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.381 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.381 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.381 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.381 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.639 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.572 00:20:58.572 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.572 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.572 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.831 23:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.831 { 00:20:58.831 "cntlid": 93, 00:20:58.831 "qid": 0, 00:20:58.831 "state": "enabled", 00:20:58.831 "thread": "nvmf_tgt_poll_group_000", 00:20:58.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.831 "listen_address": { 00:20:58.831 "trtype": "TCP", 00:20:58.831 "adrfam": "IPv4", 00:20:58.831 "traddr": "10.0.0.2", 00:20:58.831 "trsvcid": "4420" 00:20:58.831 }, 00:20:58.831 "peer_address": { 00:20:58.831 "trtype": "TCP", 00:20:58.831 "adrfam": "IPv4", 00:20:58.831 "traddr": "10.0.0.1", 00:20:58.831 "trsvcid": "40910" 00:20:58.831 }, 00:20:58.831 "auth": { 00:20:58.831 "state": "completed", 00:20:58.831 "digest": "sha384", 00:20:58.831 "dhgroup": "ffdhe8192" 00:20:58.831 } 00:20:58.831 } 00:20:58.831 ]' 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.831 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.089 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:20:59.089 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:00.023 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.023 23:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.023 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.023 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.023 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.023 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.023 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.023 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.280 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:00.280 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.280 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.280 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:00.280 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:00.280 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.281 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:00.281 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.281 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.281 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.281 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.281 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.281 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.215 00:21:01.215 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.215 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.215 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.474 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.474 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.474 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.474 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.474 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.474 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.474 { 00:21:01.474 "cntlid": 95, 00:21:01.474 "qid": 0, 00:21:01.474 "state": "enabled", 00:21:01.474 "thread": "nvmf_tgt_poll_group_000", 00:21:01.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.474 "listen_address": { 00:21:01.474 "trtype": "TCP", 00:21:01.474 "adrfam": "IPv4", 00:21:01.474 "traddr": "10.0.0.2", 00:21:01.474 "trsvcid": "4420" 00:21:01.474 }, 00:21:01.474 "peer_address": { 00:21:01.474 "trtype": "TCP", 00:21:01.474 "adrfam": "IPv4", 00:21:01.474 "traddr": "10.0.0.1", 00:21:01.474 "trsvcid": "40936" 00:21:01.474 }, 00:21:01.474 "auth": { 00:21:01.474 "state": "completed", 00:21:01.474 "digest": "sha384", 00:21:01.474 "dhgroup": "ffdhe8192" 00:21:01.474 } 00:21:01.474 } 00:21:01.474 ]' 00:21:01.474 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.733 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.733 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.733 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:01.733 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.733 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.733 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.733 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.992 23:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:21:01.992 23:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:21:02.927 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.927 23:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.927 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.927 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.927 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.927 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:02.927 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.927 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.927 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.927 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:03.186 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:03.186 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.186 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.186 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:03.186 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:03.186 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.186 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.186 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.186 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.443 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.443 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.443 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.443 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.701 00:21:03.701 
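From here the outer loops advance: the sha384/ffdhe8192 combinations are done and the trace switches to sha512 with the null (no-DH) group. Reconstructed from the "for digest", "for dhgroup" and "for keyid" lines visible in the trace, the overall shape of the test is roughly the nested loop below; the array contents are only partly visible in this excerpt, so they are illustrative, and keys, ckeys, hostrpc and connect_authenticate are defined earlier in target/auth.sh and not reproduced here.

# Rough shape of the authentication matrix driven by target/auth.sh (array contents illustrative).
digests=("sha384" "sha512")                  # the excerpt shows at least these two
dhgroups=("null" "ffdhe6144" "ffdhe8192")    # the excerpt shows at least these three
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Restrict the host to one digest/dhgroup pair, then authenticate with key$keyid/ckey$keyid.
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done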
23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.701 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.701 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.959 { 00:21:03.959 "cntlid": 97, 00:21:03.959 "qid": 0, 00:21:03.959 "state": "enabled", 00:21:03.959 "thread": "nvmf_tgt_poll_group_000", 00:21:03.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:03.959 "listen_address": { 00:21:03.959 "trtype": "TCP", 00:21:03.959 "adrfam": "IPv4", 00:21:03.959 "traddr": "10.0.0.2", 00:21:03.959 "trsvcid": "4420" 00:21:03.959 }, 00:21:03.959 "peer_address": { 00:21:03.959 "trtype": "TCP", 00:21:03.959 "adrfam": "IPv4", 00:21:03.959 "traddr": "10.0.0.1", 00:21:03.959 "trsvcid": "40962" 00:21:03.959 }, 00:21:03.959 "auth": { 00:21:03.959 "state": "completed", 00:21:03.959 "digest": "sha512", 00:21:03.959 "dhgroup": "null" 00:21:03.959 } 00:21:03.959 } 00:21:03.959 ]' 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.959 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.524 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:21:04.525 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:21:05.460 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.460 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.460 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.460 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.460 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.460 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.460 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.460 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.717 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:05.717 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.717 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.717 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:05.717 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:05.717 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.717 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.717 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.717 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.717 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.718 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.718 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.718 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.975 00:21:05.975 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.975 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.975 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.233 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.233 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.233 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.233 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.233 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.233 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.233 { 00:21:06.233 "cntlid": 99, 00:21:06.233 "qid": 0, 00:21:06.233 "state": "enabled", 00:21:06.233 "thread": "nvmf_tgt_poll_group_000", 00:21:06.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.233 "listen_address": { 00:21:06.233 "trtype": "TCP", 00:21:06.233 "adrfam": "IPv4", 00:21:06.233 "traddr": "10.0.0.2", 00:21:06.233 "trsvcid": "4420" 00:21:06.233 }, 00:21:06.233 "peer_address": { 00:21:06.233 "trtype": "TCP", 00:21:06.233 "adrfam": "IPv4", 00:21:06.233 "traddr": "10.0.0.1", 00:21:06.233 "trsvcid": "52328" 00:21:06.233 }, 00:21:06.233 "auth": { 00:21:06.233 "state": "completed", 00:21:06.233 "digest": "sha512", 00:21:06.233 "dhgroup": "null" 00:21:06.233 } 00:21:06.233 } 00:21:06.233 ]' 00:21:06.233 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.492 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.492 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.492 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:06.492 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.492 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.492 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.492 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.751 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:21:06.751 23:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:21:07.685 23:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.685 23:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.685 23:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.685 23:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.685 23:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.685 23:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.685 23:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.685 23:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:07.944 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.515 00:21:08.515 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.515 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.515 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.830 { 00:21:08.830 "cntlid": 101, 00:21:08.830 "qid": 0, 00:21:08.830 "state": "enabled", 00:21:08.830 "thread": "nvmf_tgt_poll_group_000", 00:21:08.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.830 "listen_address": { 00:21:08.830 "trtype": "TCP", 00:21:08.830 "adrfam": "IPv4", 00:21:08.830 "traddr": "10.0.0.2", 00:21:08.830 "trsvcid": "4420" 00:21:08.830 }, 00:21:08.830 "peer_address": { 00:21:08.830 "trtype": "TCP", 00:21:08.830 "adrfam": "IPv4", 00:21:08.830 "traddr": "10.0.0.1", 00:21:08.830 "trsvcid": "52368" 00:21:08.830 }, 00:21:08.830 "auth": { 00:21:08.830 "state": "completed", 00:21:08.830 "digest": "sha512", 00:21:08.830 "dhgroup": "null" 00:21:08.830 } 00:21:08.830 } 00:21:08.830 ]' 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.830 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.116 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:09.116 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:10.050 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.050 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.050 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.050 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.050 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.050 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.050 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.050 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.307 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:10.307 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.307 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.308 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:10.308 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:10.308 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.308 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:10.308 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.308 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.308 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.308 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:10.308 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.308 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.565 00:21:10.565 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.565 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.565 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.823 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.823 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.823 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.823 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.823 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.823 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.823 { 00:21:10.823 "cntlid": 103, 00:21:10.823 "qid": 0, 00:21:10.823 "state": "enabled", 00:21:10.823 "thread": "nvmf_tgt_poll_group_000", 00:21:10.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:10.823 "listen_address": { 00:21:10.823 "trtype": "TCP", 00:21:10.823 "adrfam": "IPv4", 00:21:10.823 "traddr": "10.0.0.2", 00:21:10.823 "trsvcid": "4420" 00:21:10.823 }, 00:21:10.823 "peer_address": { 00:21:10.823 "trtype": "TCP", 00:21:10.823 "adrfam": "IPv4", 00:21:10.823 "traddr": "10.0.0.1", 00:21:10.823 "trsvcid": "52384" 00:21:10.823 }, 00:21:10.823 "auth": { 00:21:10.823 "state": "completed", 00:21:10.823 "digest": "sha512", 00:21:10.823 "dhgroup": "null" 00:21:10.823 } 00:21:10.823 } 00:21:10.823 ]' 00:21:10.823 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.081 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.081 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.081 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:11.081 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.081 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.081 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.081 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.339 23:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:21:11.339 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:21:12.273 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.273 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.273 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.273 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.273 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.273 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.273 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.273 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.273 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
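From here the trace switches from the null DH group to ffdhe2048 and repeats the same cycle for every key index. Stripped of the xtrace noise, the loop the script is walking through looks roughly like this; dhgroups, keys and ckeys are arrays set up earlier in the run (not shown in this excerpt), and only options that appear verbatim in the trace are used:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  subnqn=nqn.2024-03.io.spdk:cnode0
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Pin the host to a single digest/DH-group combination for this pass.
      $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
          --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      # The controller key is optional; key3 in this run has no ckey3 counterpart.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
          --dhchap-key "key$keyid" "${ckey[@]}"
      $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
          --dhchap-key "key$keyid" "${ckey[@]}"
      # The qpair auth state is checked here (see the jq checks in the trace), and
      # an in-band nvme-cli connect with the same secrets also runs before teardown.
      $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
      rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done
  done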
00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.531 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.788 00:21:12.788 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.788 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.788 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.047 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.047 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.047 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.047 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.047 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.047 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.047 { 00:21:13.047 "cntlid": 105, 00:21:13.047 "qid": 0, 00:21:13.047 "state": "enabled", 00:21:13.047 "thread": "nvmf_tgt_poll_group_000", 00:21:13.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:13.047 "listen_address": { 00:21:13.047 "trtype": "TCP", 00:21:13.047 "adrfam": "IPv4", 00:21:13.047 "traddr": "10.0.0.2", 00:21:13.047 "trsvcid": "4420" 00:21:13.047 }, 00:21:13.047 "peer_address": { 00:21:13.047 "trtype": "TCP", 00:21:13.047 "adrfam": "IPv4", 00:21:13.047 "traddr": "10.0.0.1", 00:21:13.047 "trsvcid": "52424" 00:21:13.047 }, 00:21:13.047 "auth": { 00:21:13.047 "state": "completed", 00:21:13.047 "digest": "sha512", 00:21:13.047 "dhgroup": "ffdhe2048" 00:21:13.047 } 00:21:13.047 } 00:21:13.047 ]' 00:21:13.306 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.306 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.306 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.306 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:13.306 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.306 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.306 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.306 23:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.564 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:21:13.564 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:21:14.497 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.497 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.497 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.497 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.497 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.497 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.497 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.497 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.755 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.321 00:21:15.321 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.321 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.321 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.321 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.321 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.321 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.321 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.579 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.579 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.579 { 00:21:15.579 "cntlid": 107, 00:21:15.579 "qid": 0, 00:21:15.579 "state": "enabled", 00:21:15.579 "thread": "nvmf_tgt_poll_group_000", 00:21:15.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.579 "listen_address": { 00:21:15.579 "trtype": "TCP", 00:21:15.579 "adrfam": "IPv4", 00:21:15.579 "traddr": "10.0.0.2", 00:21:15.579 "trsvcid": "4420" 00:21:15.579 }, 00:21:15.579 "peer_address": { 00:21:15.579 "trtype": "TCP", 00:21:15.579 "adrfam": "IPv4", 00:21:15.579 "traddr": "10.0.0.1", 00:21:15.579 "trsvcid": "38558" 00:21:15.579 }, 00:21:15.579 "auth": { 00:21:15.579 "state": "completed", 00:21:15.579 "digest": "sha512", 00:21:15.579 "dhgroup": "ffdhe2048" 00:21:15.579 } 00:21:15.579 } 00:21:15.579 ]' 00:21:15.579 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.579 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.579 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.579 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.579 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:15.579 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.579 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.579 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.837 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:21:15.837 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:21:16.770 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.770 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.770 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.770 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.770 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.770 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.770 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.770 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
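Between the controller detach and the remove_host above, each pass also proves the same secrets work for an ordinary in-band connection through nvme-cli rather than through the SPDK host bdev. Pulled out of the trace, that step amounts to the following; key and ckey stand for the DHHC-1 secret strings that the log prints in full, and only flags shown in the trace are used:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

  # The handshake itself is the point of the test, so the connection is dropped
  # again straight away and the host entry is removed before the next key.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"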
00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.028 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.595 00:21:17.595 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.595 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.595 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.853 { 00:21:17.853 "cntlid": 109, 00:21:17.853 "qid": 0, 00:21:17.853 "state": "enabled", 00:21:17.853 "thread": "nvmf_tgt_poll_group_000", 00:21:17.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.853 "listen_address": { 00:21:17.853 "trtype": "TCP", 00:21:17.853 "adrfam": "IPv4", 00:21:17.853 "traddr": "10.0.0.2", 00:21:17.853 "trsvcid": "4420" 00:21:17.853 }, 00:21:17.853 "peer_address": { 00:21:17.853 "trtype": "TCP", 00:21:17.853 "adrfam": "IPv4", 00:21:17.853 "traddr": "10.0.0.1", 00:21:17.853 "trsvcid": "38580" 00:21:17.853 }, 00:21:17.853 "auth": { 00:21:17.853 "state": "completed", 00:21:17.853 "digest": "sha512", 00:21:17.853 "dhgroup": "ffdhe2048" 00:21:17.853 } 00:21:17.853 } 00:21:17.853 ]' 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.853 23:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.853 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.111 23:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:18.111 23:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:19.046 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.046 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.046 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.046 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.046 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.046 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.046 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.046 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.304 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:19.304 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.304 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.304 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:19.304 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.304 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.304 23:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:19.304 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.304 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.304 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.304 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:19.304 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.304 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.869 00:21:19.869 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.869 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.869 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.127 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.127 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.127 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.127 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.127 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.127 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.127 { 00:21:20.127 "cntlid": 111, 00:21:20.127 "qid": 0, 00:21:20.127 "state": "enabled", 00:21:20.127 "thread": "nvmf_tgt_poll_group_000", 00:21:20.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:20.127 "listen_address": { 00:21:20.127 "trtype": "TCP", 00:21:20.127 "adrfam": "IPv4", 00:21:20.127 "traddr": "10.0.0.2", 00:21:20.127 "trsvcid": "4420" 00:21:20.127 }, 00:21:20.127 "peer_address": { 00:21:20.127 "trtype": "TCP", 00:21:20.127 "adrfam": "IPv4", 00:21:20.127 "traddr": "10.0.0.1", 00:21:20.127 "trsvcid": "38612" 00:21:20.127 }, 00:21:20.127 "auth": { 00:21:20.127 "state": "completed", 00:21:20.127 "digest": "sha512", 00:21:20.127 "dhgroup": "ffdhe2048" 00:21:20.127 } 00:21:20.127 } 00:21:20.127 ]' 00:21:20.127 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.127 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.127 
23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.127 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:20.128 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.128 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.128 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.128 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.385 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:21:20.386 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:21:21.318 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.576 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.576 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.576 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.576 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.576 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.576 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.576 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.576 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.834 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.093 00:21:22.093 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.093 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.093 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.351 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.351 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.351 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.351 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.351 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.351 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.351 { 00:21:22.351 "cntlid": 113, 00:21:22.351 "qid": 0, 00:21:22.351 "state": "enabled", 00:21:22.351 "thread": "nvmf_tgt_poll_group_000", 00:21:22.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.351 "listen_address": { 00:21:22.351 "trtype": "TCP", 00:21:22.351 "adrfam": "IPv4", 00:21:22.351 "traddr": "10.0.0.2", 00:21:22.351 "trsvcid": "4420" 00:21:22.351 }, 00:21:22.351 "peer_address": { 00:21:22.351 "trtype": "TCP", 00:21:22.351 "adrfam": "IPv4", 00:21:22.351 "traddr": "10.0.0.1", 00:21:22.351 "trsvcid": "38644" 00:21:22.351 }, 00:21:22.351 "auth": { 00:21:22.351 "state": "completed", 00:21:22.351 "digest": "sha512", 00:21:22.351 "dhgroup": "ffdhe3072" 00:21:22.351 } 00:21:22.351 } 00:21:22.351 ]' 00:21:22.351 23:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.351 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.351 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.609 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.609 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.609 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.609 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.609 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.867 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:21:22.867 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:21:23.800 23:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.800 23:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.800 23:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.800 23:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.800 23:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.800 23:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.800 23:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.800 23:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.058 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.624 00:21:24.624 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.624 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.624 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.882 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.882 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.882 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.882 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.882 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.882 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.882 { 00:21:24.882 "cntlid": 115, 00:21:24.882 "qid": 0, 00:21:24.882 "state": "enabled", 00:21:24.882 "thread": "nvmf_tgt_poll_group_000", 00:21:24.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.882 "listen_address": { 00:21:24.882 "trtype": "TCP", 00:21:24.882 "adrfam": "IPv4", 00:21:24.882 "traddr": "10.0.0.2", 00:21:24.882 "trsvcid": "4420" 00:21:24.882 }, 00:21:24.882 "peer_address": { 00:21:24.882 "trtype": "TCP", 00:21:24.882 "adrfam": "IPv4", 
00:21:24.882 "traddr": "10.0.0.1", 00:21:24.882 "trsvcid": "46170" 00:21:24.882 }, 00:21:24.882 "auth": { 00:21:24.882 "state": "completed", 00:21:24.882 "digest": "sha512", 00:21:24.882 "dhgroup": "ffdhe3072" 00:21:24.882 } 00:21:24.882 } 00:21:24.882 ]' 00:21:24.882 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.883 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.883 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.883 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.883 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.883 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.883 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.883 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.141 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:21:25.141 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:21:26.075 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.075 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.075 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.075 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.075 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.075 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.075 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.075 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.333 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.899 00:21:26.899 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.899 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.899 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.157 { 00:21:27.157 "cntlid": 117, 00:21:27.157 "qid": 0, 00:21:27.157 "state": "enabled", 00:21:27.157 "thread": "nvmf_tgt_poll_group_000", 00:21:27.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.157 "listen_address": { 00:21:27.157 "trtype": "TCP", 
00:21:27.157 "adrfam": "IPv4", 00:21:27.157 "traddr": "10.0.0.2", 00:21:27.157 "trsvcid": "4420" 00:21:27.157 }, 00:21:27.157 "peer_address": { 00:21:27.157 "trtype": "TCP", 00:21:27.157 "adrfam": "IPv4", 00:21:27.157 "traddr": "10.0.0.1", 00:21:27.157 "trsvcid": "46200" 00:21:27.157 }, 00:21:27.157 "auth": { 00:21:27.157 "state": "completed", 00:21:27.157 "digest": "sha512", 00:21:27.157 "dhgroup": "ffdhe3072" 00:21:27.157 } 00:21:27.157 } 00:21:27.157 ]' 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.157 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.416 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:27.416 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:28.349 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.349 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.349 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.349 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.349 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.349 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.349 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.349 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.913 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.171 00:21:29.171 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.171 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.171 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.429 { 00:21:29.429 "cntlid": 119, 00:21:29.429 "qid": 0, 00:21:29.429 "state": "enabled", 00:21:29.429 "thread": "nvmf_tgt_poll_group_000", 00:21:29.429 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.429 "listen_address": { 00:21:29.429 "trtype": "TCP", 00:21:29.429 "adrfam": "IPv4", 00:21:29.429 "traddr": "10.0.0.2", 00:21:29.429 "trsvcid": "4420" 00:21:29.429 }, 00:21:29.429 "peer_address": { 00:21:29.429 "trtype": "TCP", 00:21:29.429 "adrfam": "IPv4", 00:21:29.429 "traddr": "10.0.0.1", 00:21:29.429 "trsvcid": "46236" 00:21:29.429 }, 00:21:29.429 "auth": { 00:21:29.429 "state": "completed", 00:21:29.429 "digest": "sha512", 00:21:29.429 "dhgroup": "ffdhe3072" 00:21:29.429 } 00:21:29.429 } 00:21:29.429 ]' 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.429 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.688 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:21:29.688 23:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:21:31.060 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.060 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.060 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.060 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.060 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.060 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.060 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.060 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.060 23:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.060 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:31.060 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.060 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.060 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:31.060 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:31.060 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.060 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.060 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.060 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.060 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.060 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.060 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.061 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.625 00:21:31.625 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.625 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.625 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.884 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.884 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.884 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.884 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.884 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.884 23:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.884 { 00:21:31.884 "cntlid": 121, 00:21:31.884 "qid": 0, 00:21:31.884 "state": "enabled", 00:21:31.884 "thread": "nvmf_tgt_poll_group_000", 00:21:31.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.884 "listen_address": { 00:21:31.884 "trtype": "TCP", 00:21:31.884 "adrfam": "IPv4", 00:21:31.884 "traddr": "10.0.0.2", 00:21:31.884 "trsvcid": "4420" 00:21:31.884 }, 00:21:31.884 "peer_address": { 00:21:31.884 "trtype": "TCP", 00:21:31.884 "adrfam": "IPv4", 00:21:31.884 "traddr": "10.0.0.1", 00:21:31.884 "trsvcid": "46258" 00:21:31.884 }, 00:21:31.884 "auth": { 00:21:31.884 "state": "completed", 00:21:31.884 "digest": "sha512", 00:21:31.884 "dhgroup": "ffdhe4096" 00:21:31.884 } 00:21:31.884 } 00:21:31.884 ]' 00:21:31.884 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.884 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.884 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.884 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:31.884 23:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.884 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.884 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.884 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.142 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:21:32.142 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:21:33.074 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.332 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.332 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.332 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.332 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
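The same credentials are also exercised through nvme-cli, as in the nvme_connect / nvme disconnect traces above. A minimal sketch of that host-side path follows; the DHHC-1 secrets are placeholders standing in for the generated keys, not real values from this run.

# Connect with in-band DH-HMAC-CHAP authentication (placeholder secrets).
nvme connect -t tcp -a 10.0.0.2 -l 0 -i 1 \
    -n nqn.2024-03.io.spdk:cnode0 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "DHHC-1:00:<host key placeholder>" \
    --dhchap-ctrl-secret "DHHC-1:03:<controller key placeholder>"

# Drop the connection once the check completes.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0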
00:21:33.332 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.332 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.332 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.590 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.848 00:21:33.848 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.848 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.848 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.413 { 00:21:34.413 "cntlid": 123, 00:21:34.413 "qid": 0, 00:21:34.413 "state": "enabled", 00:21:34.413 "thread": "nvmf_tgt_poll_group_000", 00:21:34.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.413 "listen_address": { 00:21:34.413 "trtype": "TCP", 00:21:34.413 "adrfam": "IPv4", 00:21:34.413 "traddr": "10.0.0.2", 00:21:34.413 "trsvcid": "4420" 00:21:34.413 }, 00:21:34.413 "peer_address": { 00:21:34.413 "trtype": "TCP", 00:21:34.413 "adrfam": "IPv4", 00:21:34.413 "traddr": "10.0.0.1", 00:21:34.413 "trsvcid": "51668" 00:21:34.413 }, 00:21:34.413 "auth": { 00:21:34.413 "state": "completed", 00:21:34.413 "digest": "sha512", 00:21:34.413 "dhgroup": "ffdhe4096" 00:21:34.413 } 00:21:34.413 } 00:21:34.413 ]' 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.413 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.671 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:21:34.671 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:21:35.604 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.604 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.604 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.604 23:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.604 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.604 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.604 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.604 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.863 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.428 00:21:36.428 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.428 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.428 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.688 23:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.688 { 00:21:36.688 "cntlid": 125, 00:21:36.688 "qid": 0, 00:21:36.688 "state": "enabled", 00:21:36.688 "thread": "nvmf_tgt_poll_group_000", 00:21:36.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.688 "listen_address": { 00:21:36.688 "trtype": "TCP", 00:21:36.688 "adrfam": "IPv4", 00:21:36.688 "traddr": "10.0.0.2", 00:21:36.688 "trsvcid": "4420" 00:21:36.688 }, 00:21:36.688 "peer_address": { 00:21:36.688 "trtype": "TCP", 00:21:36.688 "adrfam": "IPv4", 00:21:36.688 "traddr": "10.0.0.1", 00:21:36.688 "trsvcid": "51694" 00:21:36.688 }, 00:21:36.688 "auth": { 00:21:36.688 "state": "completed", 00:21:36.688 "digest": "sha512", 00:21:36.688 "dhgroup": "ffdhe4096" 00:21:36.688 } 00:21:36.688 } 00:21:36.688 ]' 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.688 23:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.946 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:36.946 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:38.319 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.319 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.320 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.902 00:21:38.902 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.902 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.902 23:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.193 23:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.193 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.193 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.193 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.193 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.193 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.193 { 00:21:39.193 "cntlid": 127, 00:21:39.193 "qid": 0, 00:21:39.193 "state": "enabled", 00:21:39.193 "thread": "nvmf_tgt_poll_group_000", 00:21:39.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:39.193 "listen_address": { 00:21:39.193 "trtype": "TCP", 00:21:39.193 "adrfam": "IPv4", 00:21:39.193 "traddr": "10.0.0.2", 00:21:39.193 "trsvcid": "4420" 00:21:39.193 }, 00:21:39.193 "peer_address": { 00:21:39.193 "trtype": "TCP", 00:21:39.193 "adrfam": "IPv4", 00:21:39.193 "traddr": "10.0.0.1", 00:21:39.193 "trsvcid": "51724" 00:21:39.193 }, 00:21:39.193 "auth": { 00:21:39.193 "state": "completed", 00:21:39.193 "digest": "sha512", 00:21:39.193 "dhgroup": "ffdhe4096" 00:21:39.193 } 00:21:39.193 } 00:21:39.193 ]' 00:21:39.193 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.193 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.193 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.193 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:39.193 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.193 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.193 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.194 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.457 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:21:39.457 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:21:40.390 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.390 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.390 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.390 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.390 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.390 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.390 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.390 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.390 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.648 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:40.648 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.648 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.648 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:40.649 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.649 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.649 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.649 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.649 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.649 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.649 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.649 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.649 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.215 00:21:41.215 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.215 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.215 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.473 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.473 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.474 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.474 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.474 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.474 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.474 { 00:21:41.474 "cntlid": 129, 00:21:41.474 "qid": 0, 00:21:41.474 "state": "enabled", 00:21:41.474 "thread": "nvmf_tgt_poll_group_000", 00:21:41.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.474 "listen_address": { 00:21:41.474 "trtype": "TCP", 00:21:41.474 "adrfam": "IPv4", 00:21:41.474 "traddr": "10.0.0.2", 00:21:41.474 "trsvcid": "4420" 00:21:41.474 }, 00:21:41.474 "peer_address": { 00:21:41.474 "trtype": "TCP", 00:21:41.474 "adrfam": "IPv4", 00:21:41.474 "traddr": "10.0.0.1", 00:21:41.474 "trsvcid": "51756" 00:21:41.474 }, 00:21:41.474 "auth": { 00:21:41.474 "state": "completed", 00:21:41.474 "digest": "sha512", 00:21:41.474 "dhgroup": "ffdhe6144" 00:21:41.474 } 00:21:41.474 } 00:21:41.474 ]' 00:21:41.474 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.732 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.732 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.732 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.732 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.732 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.732 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.732 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.989 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:21:41.989 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret 
DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:21:42.923 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.923 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.923 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.923 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.923 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.923 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.923 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.923 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.181 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:43.181 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.181 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.181 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:43.181 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.181 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.181 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.181 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.181 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.181 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.181 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.181 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.182 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.746 00:21:43.746 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.746 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.746 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.009 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.009 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.009 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.009 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.009 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.009 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.009 { 00:21:44.009 "cntlid": 131, 00:21:44.009 "qid": 0, 00:21:44.009 "state": "enabled", 00:21:44.009 "thread": "nvmf_tgt_poll_group_000", 00:21:44.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.009 "listen_address": { 00:21:44.009 "trtype": "TCP", 00:21:44.009 "adrfam": "IPv4", 00:21:44.009 "traddr": "10.0.0.2", 00:21:44.009 "trsvcid": "4420" 00:21:44.009 }, 00:21:44.009 "peer_address": { 00:21:44.009 "trtype": "TCP", 00:21:44.009 "adrfam": "IPv4", 00:21:44.009 "traddr": "10.0.0.1", 00:21:44.009 "trsvcid": "51782" 00:21:44.009 }, 00:21:44.009 "auth": { 00:21:44.009 "state": "completed", 00:21:44.009 "digest": "sha512", 00:21:44.009 "dhgroup": "ffdhe6144" 00:21:44.009 } 00:21:44.009 } 00:21:44.009 ]' 00:21:44.009 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.266 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.266 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.266 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:44.266 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.266 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.266 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.266 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.523 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:21:44.523 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:21:45.455 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.455 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.455 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.455 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.455 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.455 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.455 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:45.455 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.713 23:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.279 00:21:46.279 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.279 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.279 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.538 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.538 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.538 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.538 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.538 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.538 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.538 { 00:21:46.538 "cntlid": 133, 00:21:46.538 "qid": 0, 00:21:46.538 "state": "enabled", 00:21:46.538 "thread": "nvmf_tgt_poll_group_000", 00:21:46.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.538 "listen_address": { 00:21:46.538 "trtype": "TCP", 00:21:46.538 "adrfam": "IPv4", 00:21:46.538 "traddr": "10.0.0.2", 00:21:46.538 "trsvcid": "4420" 00:21:46.538 }, 00:21:46.538 "peer_address": { 00:21:46.538 "trtype": "TCP", 00:21:46.538 "adrfam": "IPv4", 00:21:46.538 "traddr": "10.0.0.1", 00:21:46.538 "trsvcid": "60206" 00:21:46.538 }, 00:21:46.538 "auth": { 00:21:46.538 "state": "completed", 00:21:46.538 "digest": "sha512", 00:21:46.538 "dhgroup": "ffdhe6144" 00:21:46.538 } 00:21:46.538 } 00:21:46.538 ]' 00:21:46.538 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.538 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.538 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.796 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:46.796 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.796 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.796 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.796 23:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.054 23:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret 
DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:47.054 23:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:47.987 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.987 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.987 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.987 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.987 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.987 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.987 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:47.987 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:48.245 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.811 00:21:48.811 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.811 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.811 23:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.070 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.070 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.070 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.070 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.070 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.070 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.070 { 00:21:49.070 "cntlid": 135, 00:21:49.070 "qid": 0, 00:21:49.070 "state": "enabled", 00:21:49.070 "thread": "nvmf_tgt_poll_group_000", 00:21:49.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.070 "listen_address": { 00:21:49.070 "trtype": "TCP", 00:21:49.070 "adrfam": "IPv4", 00:21:49.070 "traddr": "10.0.0.2", 00:21:49.070 "trsvcid": "4420" 00:21:49.070 }, 00:21:49.070 "peer_address": { 00:21:49.070 "trtype": "TCP", 00:21:49.070 "adrfam": "IPv4", 00:21:49.070 "traddr": "10.0.0.1", 00:21:49.070 "trsvcid": "60230" 00:21:49.070 }, 00:21:49.070 "auth": { 00:21:49.070 "state": "completed", 00:21:49.070 "digest": "sha512", 00:21:49.070 "dhgroup": "ffdhe6144" 00:21:49.070 } 00:21:49.070 } 00:21:49.070 ]' 00:21:49.070 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.328 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.328 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.328 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.328 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.328 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.328 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.328 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.585 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:21:49.585 23:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:21:50.518 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.518 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.518 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.518 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.518 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.518 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.518 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.518 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:50.518 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.776 23:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.709 00:21:51.709 23:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.709 23:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.709 23:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.967 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.967 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.967 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.967 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.967 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.967 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.967 { 00:21:51.967 "cntlid": 137, 00:21:51.967 "qid": 0, 00:21:51.967 "state": "enabled", 00:21:51.967 "thread": "nvmf_tgt_poll_group_000", 00:21:51.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.967 "listen_address": { 00:21:51.967 "trtype": "TCP", 00:21:51.967 "adrfam": "IPv4", 00:21:51.967 "traddr": "10.0.0.2", 00:21:51.967 "trsvcid": "4420" 00:21:51.967 }, 00:21:51.967 "peer_address": { 00:21:51.967 "trtype": "TCP", 00:21:51.967 "adrfam": "IPv4", 00:21:51.967 "traddr": "10.0.0.1", 00:21:51.967 "trsvcid": "60248" 00:21:51.967 }, 00:21:51.967 "auth": { 00:21:51.967 "state": "completed", 00:21:51.967 "digest": "sha512", 00:21:51.967 "dhgroup": "ffdhe8192" 00:21:51.967 } 00:21:51.967 } 00:21:51.967 ]' 00:21:51.967 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.967 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.967 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.967 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:51.967 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.968 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.968 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.968 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.535 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:21:52.535 23:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:21:53.469 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.469 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.469 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.469 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.469 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.469 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.469 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.469 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.727 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:53.727 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.727 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.727 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:53.727 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:53.727 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.727 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.727 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.727 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.727 23:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.727 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.727 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.727 23:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.660 00:21:54.660 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.660 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.660 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.918 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.918 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.919 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.919 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.919 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.919 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.919 { 00:21:54.919 "cntlid": 139, 00:21:54.919 "qid": 0, 00:21:54.919 "state": "enabled", 00:21:54.919 "thread": "nvmf_tgt_poll_group_000", 00:21:54.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:54.919 "listen_address": { 00:21:54.919 "trtype": "TCP", 00:21:54.919 "adrfam": "IPv4", 00:21:54.919 "traddr": "10.0.0.2", 00:21:54.919 "trsvcid": "4420" 00:21:54.919 }, 00:21:54.919 "peer_address": { 00:21:54.919 "trtype": "TCP", 00:21:54.919 "adrfam": "IPv4", 00:21:54.919 "traddr": "10.0.0.1", 00:21:54.919 "trsvcid": "43490" 00:21:54.919 }, 00:21:54.919 "auth": { 00:21:54.919 "state": "completed", 00:21:54.919 "digest": "sha512", 00:21:54.919 "dhgroup": "ffdhe8192" 00:21:54.919 } 00:21:54.919 } 00:21:54.919 ]' 00:21:54.919 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.919 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.919 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.919 23:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.919 23:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.919 23:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.919 23:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.919 23:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.177 23:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:21:55.177 23:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: --dhchap-ctrl-secret DHHC-1:02:NDZkOTJhNjAwMGI5MzEyYmMzMjNjZGZmMmI4NzdlYTg3NmY4ZTllOTEwOGFiOTVmZsKlDw==: 00:21:56.111 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.111 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.111 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.111 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.111 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.111 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.111 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.111 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.369 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:56.369 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.369 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.369 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:56.369 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:56.369 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.369 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.369 23:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.369 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.369 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.369 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.369 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.370 23:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.303 00:21:57.303 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.303 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.303 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.561 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.561 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.561 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.561 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.561 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.561 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.561 { 00:21:57.561 "cntlid": 141, 00:21:57.561 "qid": 0, 00:21:57.561 "state": "enabled", 00:21:57.561 "thread": "nvmf_tgt_poll_group_000", 00:21:57.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.562 "listen_address": { 00:21:57.562 "trtype": "TCP", 00:21:57.562 "adrfam": "IPv4", 00:21:57.562 "traddr": "10.0.0.2", 00:21:57.562 "trsvcid": "4420" 00:21:57.562 }, 00:21:57.562 "peer_address": { 00:21:57.562 "trtype": "TCP", 00:21:57.562 "adrfam": "IPv4", 00:21:57.562 "traddr": "10.0.0.1", 00:21:57.562 "trsvcid": "43512" 00:21:57.562 }, 00:21:57.562 "auth": { 00:21:57.562 "state": "completed", 00:21:57.562 "digest": "sha512", 00:21:57.562 "dhgroup": "ffdhe8192" 00:21:57.562 } 00:21:57.562 } 00:21:57.562 ]' 00:21:57.562 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.819 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.819 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.819 23:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:57.819 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.819 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.819 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.819 23:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.077 23:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:58.077 23:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:01:ZjUwN2YyZTJiNGQzYWFiZjk4ZGU0ZDhlNGEzMThkODRB6ahZ: 00:21:59.011 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.011 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.011 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.011 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.011 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.011 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.011 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.011 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.269 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:59.269 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.269 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.269 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:59.269 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:59.269 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.269 23:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:59.269 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.269 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.269 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.269 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:59.269 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.269 23:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.204 00:22:00.204 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.204 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.204 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.463 { 00:22:00.463 "cntlid": 143, 00:22:00.463 "qid": 0, 00:22:00.463 "state": "enabled", 00:22:00.463 "thread": "nvmf_tgt_poll_group_000", 00:22:00.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.463 "listen_address": { 00:22:00.463 "trtype": "TCP", 00:22:00.463 "adrfam": "IPv4", 00:22:00.463 "traddr": "10.0.0.2", 00:22:00.463 "trsvcid": "4420" 00:22:00.463 }, 00:22:00.463 "peer_address": { 00:22:00.463 "trtype": "TCP", 00:22:00.463 "adrfam": "IPv4", 00:22:00.463 "traddr": "10.0.0.1", 00:22:00.463 "trsvcid": "43544" 00:22:00.463 }, 00:22:00.463 "auth": { 00:22:00.463 "state": "completed", 00:22:00.463 "digest": "sha512", 00:22:00.463 "dhgroup": "ffdhe8192" 00:22:00.463 } 00:22:00.463 } 00:22:00.463 ]' 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.463 
23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.463 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.028 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:22:01.028 23:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:22:01.962 23:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.962 23:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.962 23:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.962 23:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.962 23:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.962 23:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:01.962 23:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:01.962 23:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:01.962 23:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:01.962 23:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:01.962 23:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.220 23:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:02.220 23:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.220 23:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.220 23:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:02.220 23:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:02.220 23:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.220 23:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.220 23:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.220 23:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.220 23:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.220 23:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.220 23:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.220 23:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.154 00:22:03.154 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.154 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.154 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.412 { 00:22:03.412 "cntlid": 145, 00:22:03.412 "qid": 0, 00:22:03.412 "state": "enabled", 00:22:03.412 "thread": "nvmf_tgt_poll_group_000", 00:22:03.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.412 "listen_address": { 00:22:03.412 "trtype": "TCP", 00:22:03.412 "adrfam": "IPv4", 00:22:03.412 "traddr": "10.0.0.2", 00:22:03.412 "trsvcid": "4420" 00:22:03.412 }, 00:22:03.412 "peer_address": { 00:22:03.412 
"trtype": "TCP", 00:22:03.412 "adrfam": "IPv4", 00:22:03.412 "traddr": "10.0.0.1", 00:22:03.412 "trsvcid": "43584" 00:22:03.412 }, 00:22:03.412 "auth": { 00:22:03.412 "state": "completed", 00:22:03.412 "digest": "sha512", 00:22:03.412 "dhgroup": "ffdhe8192" 00:22:03.412 } 00:22:03.412 } 00:22:03.412 ]' 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.412 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.669 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:22:03.669 23:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjQ1YzgzOGFiMTEwZGM3N2U1ZGVhOTcxM2NiNmI3ODE1ZmYyOGI2ODBkNTcwYzIxV4Je+Q==: --dhchap-ctrl-secret DHHC-1:03:NGUwMWY3NzRlYjZlMmE5ZWZiYTAyNzQ2NTlkYmM2ODk5Y2M5YWQ3Y2FlM2NmMTQzMmZiMDE0MGM2YmFjMThkNab9iIA=: 00:22:04.599 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.599 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.599 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.599 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:04.857 23:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:05.791 request: 00:22:05.791 { 00:22:05.791 "name": "nvme0", 00:22:05.791 "trtype": "tcp", 00:22:05.791 "traddr": "10.0.0.2", 00:22:05.791 "adrfam": "ipv4", 00:22:05.791 "trsvcid": "4420", 00:22:05.791 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.791 "prchk_reftag": false, 00:22:05.791 "prchk_guard": false, 00:22:05.791 "hdgst": false, 00:22:05.791 "ddgst": false, 00:22:05.791 "dhchap_key": "key2", 00:22:05.791 "allow_unrecognized_csi": false, 00:22:05.791 "method": "bdev_nvme_attach_controller", 00:22:05.791 "req_id": 1 00:22:05.791 } 00:22:05.791 Got JSON-RPC error response 00:22:05.791 response: 00:22:05.791 { 00:22:05.791 "code": -5, 00:22:05.791 "message": "Input/output error" 00:22:05.791 } 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.791 23:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.791 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.356 request: 00:22:06.356 { 00:22:06.356 "name": "nvme0", 00:22:06.356 "trtype": "tcp", 00:22:06.356 "traddr": "10.0.0.2", 00:22:06.356 "adrfam": "ipv4", 00:22:06.356 "trsvcid": "4420", 00:22:06.356 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.356 "prchk_reftag": false, 00:22:06.356 "prchk_guard": false, 00:22:06.356 "hdgst": false, 00:22:06.356 "ddgst": false, 00:22:06.356 "dhchap_key": "key1", 00:22:06.356 "dhchap_ctrlr_key": "ckey2", 00:22:06.356 "allow_unrecognized_csi": false, 00:22:06.356 "method": "bdev_nvme_attach_controller", 00:22:06.356 "req_id": 1 00:22:06.356 } 00:22:06.356 Got JSON-RPC error response 00:22:06.356 response: 00:22:06.356 { 00:22:06.356 "code": -5, 00:22:06.356 "message": "Input/output error" 00:22:06.356 } 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.614 23:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.614 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.548 request: 00:22:07.548 { 00:22:07.548 "name": "nvme0", 00:22:07.548 "trtype": "tcp", 00:22:07.548 "traddr": "10.0.0.2", 00:22:07.548 "adrfam": "ipv4", 00:22:07.548 "trsvcid": "4420", 00:22:07.548 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:07.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.548 "prchk_reftag": false, 00:22:07.548 "prchk_guard": false, 00:22:07.548 "hdgst": false, 00:22:07.548 "ddgst": false, 00:22:07.548 "dhchap_key": "key1", 00:22:07.548 "dhchap_ctrlr_key": "ckey1", 00:22:07.548 "allow_unrecognized_csi": false, 00:22:07.548 "method": "bdev_nvme_attach_controller", 00:22:07.548 "req_id": 1 00:22:07.548 } 00:22:07.548 Got JSON-RPC error response 00:22:07.548 response: 00:22:07.548 { 00:22:07.548 "code": -5, 00:22:07.548 "message": "Input/output error" 00:22:07.548 } 00:22:07.548 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:07.548 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:07.548 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:07.548 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3463846 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3463846 ']' 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3463846 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3463846 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3463846' 00:22:07.549 killing process with pid 3463846 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3463846 00:22:07.549 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3463846 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3487378 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3487378 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3487378 ']' 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:08.482 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3487378 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3487378 ']' 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
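For readers following the trace: the nvmfappstart/waitforlisten lines above boil down to launching nvmf_tgt in --wait-for-rpc mode with the nvmf_auth trace flag and polling its RPC socket until it answers. The sketch below is illustrative only, not the autotest helper itself; it assumes the in-tree ./build/bin and ./scripts paths and the default /var/tmp/spdk.sock socket (the actual run additionally wraps the target in the cvl_0_0_ns_spdk network namespace, omitted here), and it uses rpc_get_methods simply as a cheap readiness probe.

# Illustrative sketch, not the autotest helper: start the target and wait for its RPC socket.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Poll until the UNIX-domain RPC socket accepts requests (rpc_get_methods is a standard SPDK RPC).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up and listening on /var/tmp/spdk.sock"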
00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:09.878 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.878 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:09.878 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:09.878 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:09.878 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.878 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.519 null0 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cEs 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.pSs ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pSs 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OcH 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.99N ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.99N 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:10.519 23:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OAf 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.TpB ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TpB 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xtn 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
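Stripped of the test plumbing, the connect_authenticate sha512 ffdhe8192 key3 step that starts here is a two-RPC handshake: the target registers the host NQN with a DH-HMAC-CHAP key, then the host attaches a controller using the same key. A minimal sketch using only the RPCs visible in this log follows; the socket paths and NQNs are the ones shown above, the key names refer to the /tmp/spdk.key-* files loaded earlier with keyring_file_add_key, and the short rpc.py paths are an assumption in place of the full workspace paths.

# Minimal sketch of the connect_authenticate flow (sha512 digest, ffdhe8192 group, key3).
TGT_RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"    # target-side RPC socket (assumed default)
HOST_RPC="./scripts/rpc.py -s /var/tmp/host.sock"   # host-side RPC socket, as used by hostrpc above

# Target: allow the host NQN on the subsystem and bind DH-HMAC-CHAP key3 to it.
$TGT_RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3

# Host: attach a controller, authenticating with the same key.
$HOST_RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

# Verify the attach succeeded and inspect the negotiated auth state on the target side.
$HOST_RPC bdev_nvme_get_controllers
$TGT_RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0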
00:22:10.519 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:11.893 nvme0n1 00:22:11.893 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.893 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.893 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.157 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.157 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.157 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.157 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.157 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.157 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.157 { 00:22:12.157 "cntlid": 1, 00:22:12.157 "qid": 0, 00:22:12.157 "state": "enabled", 00:22:12.157 "thread": "nvmf_tgt_poll_group_000", 00:22:12.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.157 "listen_address": { 00:22:12.157 "trtype": "TCP", 00:22:12.157 "adrfam": "IPv4", 00:22:12.157 "traddr": "10.0.0.2", 00:22:12.157 "trsvcid": "4420" 00:22:12.157 }, 00:22:12.157 "peer_address": { 00:22:12.157 "trtype": "TCP", 00:22:12.157 "adrfam": "IPv4", 00:22:12.157 "traddr": "10.0.0.1", 00:22:12.157 "trsvcid": "34428" 00:22:12.157 }, 00:22:12.157 "auth": { 00:22:12.157 "state": "completed", 00:22:12.157 "digest": "sha512", 00:22:12.157 "dhgroup": "ffdhe8192" 00:22:12.157 } 00:22:12.157 } 00:22:12.157 ]' 00:22:12.157 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.157 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.157 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.420 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.420 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.420 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.421 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.421 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.679 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:22:12.679 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:22:13.612 23:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.613 23:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.613 23:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.613 23:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.613 23:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.613 23:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:13.613 23:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.613 23:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.613 23:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.613 23:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:13.613 23:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:13.871 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:13.871 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:13.871 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:13.871 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:13.871 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.871 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:13.871 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.871 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:13.871 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.871 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.129 request: 00:22:14.129 { 00:22:14.129 "name": "nvme0", 00:22:14.129 "trtype": "tcp", 00:22:14.129 "traddr": "10.0.0.2", 00:22:14.129 "adrfam": "ipv4", 00:22:14.129 "trsvcid": "4420", 00:22:14.129 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:14.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:14.129 "prchk_reftag": false, 00:22:14.129 "prchk_guard": false, 00:22:14.129 "hdgst": false, 00:22:14.129 "ddgst": false, 00:22:14.129 "dhchap_key": "key3", 00:22:14.129 "allow_unrecognized_csi": false, 00:22:14.129 "method": "bdev_nvme_attach_controller", 00:22:14.129 "req_id": 1 00:22:14.129 } 00:22:14.129 Got JSON-RPC error response 00:22:14.129 response: 00:22:14.129 { 00:22:14.129 "code": -5, 00:22:14.129 "message": "Input/output error" 00:22:14.129 } 00:22:14.129 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:14.129 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:14.129 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:14.129 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:14.129 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:14.129 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:14.129 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:14.129 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.695 request: 00:22:14.695 { 00:22:14.695 "name": "nvme0", 00:22:14.695 "trtype": "tcp", 00:22:14.695 "traddr": "10.0.0.2", 00:22:14.695 "adrfam": "ipv4", 00:22:14.695 "trsvcid": "4420", 00:22:14.695 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:14.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:14.695 "prchk_reftag": false, 00:22:14.695 "prchk_guard": false, 00:22:14.695 "hdgst": false, 00:22:14.695 "ddgst": false, 00:22:14.695 "dhchap_key": "key3", 00:22:14.695 "allow_unrecognized_csi": false, 00:22:14.695 "method": "bdev_nvme_attach_controller", 00:22:14.695 "req_id": 1 00:22:14.695 } 00:22:14.695 Got JSON-RPC error response 00:22:14.695 response: 00:22:14.695 { 00:22:14.695 "code": -5, 00:22:14.695 "message": "Input/output error" 00:22:14.695 } 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:14.695 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:14.954 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:14.954 23:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.212 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.778 request: 00:22:15.778 { 00:22:15.778 "name": "nvme0", 00:22:15.778 "trtype": "tcp", 00:22:15.778 "traddr": "10.0.0.2", 00:22:15.778 "adrfam": "ipv4", 00:22:15.778 "trsvcid": "4420", 00:22:15.778 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:15.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.778 "prchk_reftag": false, 00:22:15.778 "prchk_guard": false, 00:22:15.778 "hdgst": false, 00:22:15.778 "ddgst": false, 00:22:15.778 "dhchap_key": "key0", 00:22:15.778 "dhchap_ctrlr_key": "key1", 00:22:15.778 "allow_unrecognized_csi": false, 00:22:15.778 "method": "bdev_nvme_attach_controller", 00:22:15.778 "req_id": 1 00:22:15.778 } 00:22:15.778 Got JSON-RPC error response 00:22:15.778 response: 00:22:15.778 { 00:22:15.778 "code": -5, 00:22:15.778 "message": "Input/output error" 00:22:15.778 } 00:22:15.778 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:15.778 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:15.778 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:15.778 23:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:15.778 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:15.778 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:15.778 23:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:16.036 nvme0n1 00:22:16.036 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:16.036 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.036 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:16.294 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.294 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.294 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.552 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:16.552 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.552 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.552 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.552 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:16.552 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:16.552 23:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:17.924 nvme0n1 00:22:17.925 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:17.925 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:17.925 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.182 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.182 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:18.182 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.182 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.182 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.182 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:18.182 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:18.182 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.748 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.748 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:22:18.748 23:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: --dhchap-ctrl-secret DHHC-1:03:YmFkZTFkYjM1NmIzZjdhZDcwNTZkYTIyZDA2ZGMzZjY3YmFkYzc1YjhhMWU4YWY4NzA5ZGNmZjdiNzU5OTZhNFEvb2I=: 00:22:19.682 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:19.682 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:19.682 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:19.682 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:19.682 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:19.682 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:19.682 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:19.682 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.682 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.940 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:19.940 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:19.940 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:19.940 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:19.940 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.940 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:19.940 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.940 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:19.940 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:19.940 23:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:20.874 request: 00:22:20.874 { 00:22:20.874 "name": "nvme0", 00:22:20.874 "trtype": "tcp", 00:22:20.874 "traddr": "10.0.0.2", 00:22:20.874 "adrfam": "ipv4", 00:22:20.874 "trsvcid": "4420", 00:22:20.874 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:20.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:20.874 "prchk_reftag": false, 00:22:20.874 "prchk_guard": false, 00:22:20.874 "hdgst": false, 00:22:20.874 "ddgst": false, 00:22:20.874 "dhchap_key": "key1", 00:22:20.874 "allow_unrecognized_csi": false, 00:22:20.874 "method": "bdev_nvme_attach_controller", 00:22:20.874 "req_id": 1 00:22:20.874 } 00:22:20.874 Got JSON-RPC error response 00:22:20.874 response: 00:22:20.874 { 00:22:20.874 "code": -5, 00:22:20.874 "message": "Input/output error" 00:22:20.874 } 00:22:20.874 23:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:20.874 23:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:20.874 23:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:20.874 23:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:20.874 23:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.874 23:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.874 23:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:22.247 nvme0n1 00:22:22.247 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:22.247 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:22.247 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.506 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.506 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.506 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.764 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.764 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.764 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.764 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.764 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:22.764 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:22.764 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:23.022 nvme0n1 00:22:23.022 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:23.022 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:23.022 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.280 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.280 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.280 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: '' 2s 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: ]] 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZmZjYTBkYzg3NmUwMjBkM2MxMmUwZTc3ZmQyNmFkYmSm8RZ+: 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:23.538 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: 2s 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: ]] 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzA1YmY1ZmUwMTBhYzU2MjNjYmUzYTliYTNiYTNkMTg1YjFjZjE5NDY1ZGVhZDFlEpoQqA==: 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:26.066 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:27.965 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:29.338 nvme0n1 00:22:29.338 23:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.338 23:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.338 23:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.338 23:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.338 23:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.338 23:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:30.271 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:30.271 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:30.271 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.271 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.271 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.271 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.271 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.529 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.529 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:30.529 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:30.787 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:30.787 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:30.787 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:31.045 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:31.979 request: 00:22:31.979 { 00:22:31.979 "name": "nvme0", 00:22:31.979 "dhchap_key": "key1", 00:22:31.979 "dhchap_ctrlr_key": "key3", 00:22:31.979 "method": "bdev_nvme_set_keys", 00:22:31.979 "req_id": 1 00:22:31.979 } 00:22:31.979 Got JSON-RPC error response 00:22:31.979 response: 00:22:31.979 { 00:22:31.979 "code": -13, 00:22:31.979 "message": "Permission denied" 00:22:31.979 } 00:22:31.979 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:31.979 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:31.979 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:31.979 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:31.979 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:31.979 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:31.979 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.236 23:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:32.236 23:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:33.169 23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:33.169 23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:33.169 23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.427 23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:33.427 23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:33.427 23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.427 23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.427 23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.427 23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:33.427 23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:33.427 23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:35.325 nvme0n1 00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
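The set_keys exercises around here follow a fixed order: the target is handed the new key pair with nvmf_subsystem_set_keys first, and only then does the host re-authenticate the live controller with bdev_nvme_set_keys; asking the host to switch to keys the target does not hold appears to be what produces the code -13 "Permission denied" responses logged above. A hedged sketch of the happy path, again using only RPCs that appear in this log and the same assumed rpc.py/socket paths as earlier:

# Re-key sketch: update the target's expected keys first, then rotate the host's keys in place.
TGT_RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
HOST_RPC="./scripts/rpc.py -s /var/tmp/host.sock"

$TGT_RPC nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# The controller nvme0 stays attached; it simply re-authenticates with the new pair.
$HOST_RPC bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# Confirm the controller is still present after the re-key.
$HOST_RPC bdev_nvme_get_controllers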
00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:35.325 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:35.890 request: 00:22:35.890 { 00:22:35.890 "name": "nvme0", 00:22:35.890 "dhchap_key": "key2", 00:22:35.890 "dhchap_ctrlr_key": "key0", 00:22:35.890 "method": "bdev_nvme_set_keys", 00:22:35.890 "req_id": 1 00:22:35.890 } 00:22:35.890 Got JSON-RPC error response 00:22:35.890 response: 00:22:35.890 { 00:22:35.890 "code": -13, 00:22:35.890 "message": "Permission denied" 00:22:35.890 } 00:22:35.890 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:35.890 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:35.890 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:35.890 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:35.890 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:35.890 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:35.890 23:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.148 23:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:36.148 23:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:37.081 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:37.081 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:37.081 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.647 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:37.647 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:37.647 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:37.647 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3463998 00:22:37.648 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3463998 ']' 00:22:37.648 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3463998 00:22:37.648 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:37.648 
23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:37.648 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3463998 00:22:37.648 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:37.648 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:37.648 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3463998' 00:22:37.648 killing process with pid 3463998 00:22:37.648 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3463998 00:22:37.648 23:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3463998 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:40.174 rmmod nvme_tcp 00:22:40.174 rmmod nvme_fabrics 00:22:40.174 rmmod nvme_keyring 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3487378 ']' 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3487378 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3487378 ']' 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3487378 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3487378 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3487378' 00:22:40.174 killing process with pid 3487378 00:22:40.174 23:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3487378 00:22:40.174 23:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3487378 00:22:41.108 23:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.108 23:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.108 23:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.108 23:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:41.108 23:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:41.108 23:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.108 23:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.108 23:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.108 23:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.108 23:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.108 23:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.108 23:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.cEs /tmp/spdk.key-sha256.OcH /tmp/spdk.key-sha384.OAf /tmp/spdk.key-sha512.xtn /tmp/spdk.key-sha512.pSs /tmp/spdk.key-sha384.99N /tmp/spdk.key-sha256.TpB '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:43.642 00:22:43.642 real 3m45.740s 00:22:43.642 user 8m43.344s 00:22:43.642 sys 0m27.587s 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.642 ************************************ 00:22:43.642 END TEST nvmf_auth_target 00:22:43.642 ************************************ 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:43.642 ************************************ 00:22:43.642 START TEST nvmf_bdevio_no_huge 00:22:43.642 ************************************ 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:43.642 * Looking for test storage... 
00:22:43.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:43.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.642 --rc genhtml_branch_coverage=1 00:22:43.642 --rc genhtml_function_coverage=1 00:22:43.642 --rc genhtml_legend=1 00:22:43.642 --rc geninfo_all_blocks=1 00:22:43.642 --rc geninfo_unexecuted_blocks=1 00:22:43.642 00:22:43.642 ' 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:43.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.642 --rc genhtml_branch_coverage=1 00:22:43.642 --rc genhtml_function_coverage=1 00:22:43.642 --rc genhtml_legend=1 00:22:43.642 --rc geninfo_all_blocks=1 00:22:43.642 --rc geninfo_unexecuted_blocks=1 00:22:43.642 00:22:43.642 ' 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:43.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.642 --rc genhtml_branch_coverage=1 00:22:43.642 --rc genhtml_function_coverage=1 00:22:43.642 --rc genhtml_legend=1 00:22:43.642 --rc geninfo_all_blocks=1 00:22:43.642 --rc geninfo_unexecuted_blocks=1 00:22:43.642 00:22:43.642 ' 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:43.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.642 --rc genhtml_branch_coverage=1 00:22:43.642 --rc genhtml_function_coverage=1 00:22:43.642 --rc genhtml_legend=1 00:22:43.642 --rc geninfo_all_blocks=1 00:22:43.642 --rc geninfo_unexecuted_blocks=1 00:22:43.642 00:22:43.642 ' 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.642 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:43.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.643 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.601 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:45.602 
23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:45.602 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:45.602 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:45.602 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:45.602 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:45.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:22:45.602 00:22:45.602 --- 10.0.0.2 ping statistics --- 00:22:45.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.602 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:22:45.602 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:22:45.602 00:22:45.603 --- 10.0.0.1 ping statistics --- 00:22:45.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.603 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3493157 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3493157 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 3493157 ']' 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:45.603 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.603 [2024-11-09 23:56:11.653729] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:22:45.603 [2024-11-09 23:56:11.653879] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:45.861 [2024-11-09 23:56:11.825314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.861 [2024-11-09 23:56:11.979451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.861 [2024-11-09 23:56:11.979523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.861 [2024-11-09 23:56:11.979550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.861 [2024-11-09 23:56:11.979575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.861 [2024-11-09 23:56:11.979606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
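For reference, a minimal sketch of the no-hugepage target launch traced just above, assuming the same workspace path, namespace name, and flags that appear in this log (the harness then waits for the process to listen on /var/tmp/spdk.sock before issuing RPCs):

  # start nvmf_tgt without hugepages (1024 MiB of ordinary memory, core mask 0x78)
  # inside the cvl_0_0_ns_spdk network namespace created earlier in the run
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  # per the startup notice above, a tracepoint snapshot of this instance can be captured with:
  spdk_trace -s nvmf -i 0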
00:22:45.861 [2024-11-09 23:56:11.981748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:45.861 [2024-11-09 23:56:11.981796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:45.861 [2024-11-09 23:56:11.981852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.861 [2024-11-09 23:56:11.981858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:46.796 [2024-11-09 23:56:12.668830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:46.796 Malloc0 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:46.796 [2024-11-09 23:56:12.760330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:46.796 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:46.797 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:46.797 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:46.797 { 00:22:46.797 "params": { 00:22:46.797 "name": "Nvme$subsystem", 00:22:46.797 "trtype": "$TEST_TRANSPORT", 00:22:46.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.797 "adrfam": "ipv4", 00:22:46.797 "trsvcid": "$NVMF_PORT", 00:22:46.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.797 "hdgst": ${hdgst:-false}, 00:22:46.797 "ddgst": ${ddgst:-false} 00:22:46.797 }, 00:22:46.797 "method": "bdev_nvme_attach_controller" 00:22:46.797 } 00:22:46.797 EOF 00:22:46.797 )") 00:22:46.797 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:46.797 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:46.797 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:46.797 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:46.797 "params": { 00:22:46.797 "name": "Nvme1", 00:22:46.797 "trtype": "tcp", 00:22:46.797 "traddr": "10.0.0.2", 00:22:46.797 "adrfam": "ipv4", 00:22:46.797 "trsvcid": "4420", 00:22:46.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.797 "hdgst": false, 00:22:46.797 "ddgst": false 00:22:46.797 }, 00:22:46.797 "method": "bdev_nvme_attach_controller" 00:22:46.797 }' 00:22:46.797 [2024-11-09 23:56:12.847447] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:22:46.797 [2024-11-09 23:56:12.847621] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3493314 ] 00:22:47.055 [2024-11-09 23:56:13.000581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:47.055 [2024-11-09 23:56:13.144374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.055 [2024-11-09 23:56:13.144419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.055 [2024-11-09 23:56:13.144429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.621 I/O targets: 00:22:47.621 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:47.621 00:22:47.621 00:22:47.621 CUnit - A unit testing framework for C - Version 2.1-3 00:22:47.621 http://cunit.sourceforge.net/ 00:22:47.621 00:22:47.621 00:22:47.622 Suite: bdevio tests on: Nvme1n1 00:22:47.622 Test: blockdev write read block ...passed 00:22:47.880 Test: blockdev write zeroes read block ...passed 00:22:47.880 Test: blockdev write zeroes read no split ...passed 00:22:47.880 Test: blockdev write zeroes read split ...passed 00:22:47.880 Test: blockdev write zeroes read split partial ...passed 00:22:47.880 Test: blockdev reset ...[2024-11-09 23:56:13.883811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:47.880 [2024-11-09 23:56:13.883993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:47.880 [2024-11-09 23:56:13.994887] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:47.880 passed 00:22:47.880 Test: blockdev write read 8 blocks ...passed 00:22:47.880 Test: blockdev write read size > 128k ...passed 00:22:47.880 Test: blockdev write read invalid size ...passed 00:22:47.880 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:47.880 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:47.880 Test: blockdev write read max offset ...passed 00:22:48.138 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:48.138 Test: blockdev writev readv 8 blocks ...passed 00:22:48.138 Test: blockdev writev readv 30 x 1block ...passed 00:22:48.138 Test: blockdev writev readv block ...passed 00:22:48.138 Test: blockdev writev readv size > 128k ...passed 00:22:48.138 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:48.138 Test: blockdev comparev and writev ...[2024-11-09 23:56:14.210455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:48.138 [2024-11-09 23:56:14.210532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.138 [2024-11-09 23:56:14.210572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:48.138 [2024-11-09 23:56:14.210608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:48.138 [2024-11-09 23:56:14.211090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:48.138 [2024-11-09 23:56:14.211125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:48.138 [2024-11-09 23:56:14.211164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:48.138 [2024-11-09 23:56:14.211191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:48.138 [2024-11-09 23:56:14.211651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:48.138 [2024-11-09 23:56:14.211686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:48.138 [2024-11-09 23:56:14.211724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:48.138 [2024-11-09 23:56:14.211750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:48.138 [2024-11-09 23:56:14.212199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:48.138 [2024-11-09 23:56:14.212232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:48.138 [2024-11-09 23:56:14.212265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:48.138 [2024-11-09 23:56:14.212298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:48.138 passed 00:22:48.138 Test: blockdev nvme passthru rw ...passed 00:22:48.138 Test: blockdev nvme passthru vendor specific ...[2024-11-09 23:56:14.295002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:48.138 [2024-11-09 23:56:14.295064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:48.138 [2024-11-09 23:56:14.295311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:48.138 [2024-11-09 23:56:14.295345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:48.138 [2024-11-09 23:56:14.295550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:48.139 [2024-11-09 23:56:14.295582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:48.139 [2024-11-09 23:56:14.295801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:48.139 [2024-11-09 23:56:14.295832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:48.139 passed 00:22:48.139 Test: blockdev nvme admin passthru ...passed 00:22:48.397 Test: blockdev copy ...passed 00:22:48.397 00:22:48.397 Run Summary: Type Total Ran Passed Failed Inactive 00:22:48.397 suites 1 1 n/a 0 0 00:22:48.397 tests 23 23 23 0 0 00:22:48.397 asserts 152 152 152 0 n/a 00:22:48.397 00:22:48.397 Elapsed time = 1.250 seconds 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.964 rmmod nvme_tcp 00:22:48.964 rmmod nvme_fabrics 00:22:48.964 rmmod nvme_keyring 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3493157 ']' 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3493157 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 3493157 ']' 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 3493157 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3493157 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3493157' 00:22:48.964 killing process with pid 3493157 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 3493157 00:22:48.964 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 3493157 00:22:49.900 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:49.900 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:49.900 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:49.900 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:49.900 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:49.900 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:49.900 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:49.900 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:49.900 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:49.900 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.900 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.900 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:52.430 00:22:52.430 real 0m8.736s 00:22:52.430 user 0m20.595s 00:22:52.430 sys 0m2.847s 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:52.430 ************************************ 00:22:52.430 END TEST nvmf_bdevio_no_huge 00:22:52.430 ************************************ 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:52.430 ************************************ 00:22:52.430 START TEST nvmf_tls 00:22:52.430 ************************************ 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:52.430 * Looking for test storage... 00:22:52.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:52.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.430 --rc genhtml_branch_coverage=1 00:22:52.430 --rc genhtml_function_coverage=1 00:22:52.430 --rc genhtml_legend=1 00:22:52.430 --rc geninfo_all_blocks=1 00:22:52.430 --rc geninfo_unexecuted_blocks=1 00:22:52.430 00:22:52.430 ' 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:52.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.430 --rc genhtml_branch_coverage=1 00:22:52.430 --rc genhtml_function_coverage=1 00:22:52.430 --rc genhtml_legend=1 00:22:52.430 --rc geninfo_all_blocks=1 00:22:52.430 --rc geninfo_unexecuted_blocks=1 00:22:52.430 00:22:52.430 ' 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:52.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.430 --rc genhtml_branch_coverage=1 00:22:52.430 --rc genhtml_function_coverage=1 00:22:52.430 --rc genhtml_legend=1 00:22:52.430 --rc geninfo_all_blocks=1 00:22:52.430 --rc geninfo_unexecuted_blocks=1 00:22:52.430 00:22:52.430 ' 00:22:52.430 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:52.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.430 --rc genhtml_branch_coverage=1 00:22:52.430 --rc genhtml_function_coverage=1 00:22:52.430 --rc genhtml_legend=1 00:22:52.430 --rc geninfo_all_blocks=1 00:22:52.430 --rc geninfo_unexecuted_blocks=1 00:22:52.430 00:22:52.430 ' 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
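The xtrace above shows scripts/common.sh comparing the installed lcov version against 2 field by field (versions split on '.', '-' and ':') before choosing which coverage options to export. A minimal sketch of that dotted-version "less than" check follows; the helper name and exact structure are illustrative, not the verbatim scripts/common.sh implementation:

    # Compare dotted version strings field by field, padding the shorter one
    # with zeros -- the same idea as the cmp_versions trace above.
    version_lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i a b
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        ((a < b)) && return 0   # first differing field decides
        ((a > b)) && return 1
      done
      return 1                  # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov older than 2: use legacy lcov_* rc options"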
00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:52.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:52.431 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
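The arrays being built above (and the device scan that follows) show nvmf/common.sh collecting candidate NIC PCI IDs (Intel E810/X722 and Mellanox parts) and then resolving each PCI function to its kernel net device through sysfs. A standalone sketch of that sysfs lookup for the E810 (8086:159b) parts found in this run; the helper name is illustrative and the paths assume a standard sysfs layout:

    # Map Intel E810 (vendor 0x8086, device 0x159b) PCI functions to their
    # kernel net devices via /sys/bus/pci/devices/<bdf>/net/*, mirroring the
    # pci_net_devs lookup traced below.
    find_e810_netdevs() {
      local pci netdev
      for pci in /sys/bus/pci/devices/*; do
        [[ $(< "$pci/vendor") == 0x8086 ]] || continue
        [[ $(< "$pci/device") == 0x159b ]] || continue
        for netdev in "$pci"/net/*; do
          [[ -e $netdev ]] && echo "${pci##*/} -> ${netdev##*/}"
        done
      done
    }

    find_e810_netdevs   # on this test bed: 0000:0a:00.0 -> cvl_0_0, 0000:0a:00.1 -> cvl_0_1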
00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:54.335 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:54.335 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:54.335 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:54.335 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:54.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:22:54.335 00:22:54.335 --- 10.0.0.2 ping statistics --- 00:22:54.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.335 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:22:54.335 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:54.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:22:54.335 00:22:54.335 --- 10.0.0.1 ping statistics --- 00:22:54.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.336 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3495535 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3495535 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3495535 ']' 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:54.336 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.336 [2024-11-09 23:56:20.510035] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
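With ping working between the target namespace (10.0.0.2) and the initiator interface (10.0.0.1) and nvme-tcp loaded, nvmfappstart launches nvmf_tgt inside cvl_0_0_ns_spdk with --wait-for-rpc, and waitforlisten then blocks on the RPC socket. A minimal sketch of such a readiness wait, assuming the default /var/tmp/spdk.sock socket and rpc_get_methods as the probe; this approximates what the harness does rather than copying it:

    # Poll the target's JSON-RPC socket until it answers, bailing out if the
    # process exits first. Probe method, timeout, and the relative rpc.py
    # path (run from the SPDK tree) are assumptions.
    wait_for_rpc_socket() {
      local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} deadline=$((SECONDS + 30))
      while ((SECONDS < deadline)); do
        kill -0 "$pid" 2> /dev/null || return 1      # target process died
        if scripts/rpc.py -s "$rpc_sock" -t 1 rpc_get_methods &> /dev/null; then
          return 0                                   # RPC server is listening
        fi
        sleep 0.5
      done
      return 1
    }

    wait_for_rpc_socket "$nvmfpid" /var/tmp/spdk.sock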
00:22:54.336 [2024-11-09 23:56:20.510169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.594 [2024-11-09 23:56:20.665927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.851 [2024-11-09 23:56:20.802348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.851 [2024-11-09 23:56:20.802428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.851 [2024-11-09 23:56:20.802466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.851 [2024-11-09 23:56:20.802505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.851 [2024-11-09 23:56:20.802536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.851 [2024-11-09 23:56:20.804298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.418 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:55.418 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:55.418 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.418 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:55.418 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.418 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.418 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:55.418 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:55.676 true 00:22:55.676 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:55.676 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:55.933 23:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:55.933 23:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:55.933 23:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:56.499 23:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:56.499 23:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:56.499 23:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:56.499 23:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:56.499 23:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:57.065 23:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:57.065 23:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:57.065 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:57.065 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:57.065 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:57.065 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:57.322 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:57.323 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:57.323 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:57.581 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:57.581 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:58.148 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:58.148 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:58.148 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:58.148 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:58.148 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:58.406 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:58.407 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:58.407 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:58.407 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:58.407 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:58.407 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:58.407 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:58.407 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:58.407 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.KQvZkzf5QJ 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.ltpAvppU6T 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.KQvZkzf5QJ 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.ltpAvppU6T 00:22:58.665 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:58.924 23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:59.490 23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.KQvZkzf5QJ 00:22:59.490 23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KQvZkzf5QJ 00:22:59.490 23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:00.056 [2024-11-09 23:56:25.974036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.056 23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:00.315 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:00.573 [2024-11-09 23:56:26.531528] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:00.573 [2024-11-09 23:56:26.531928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.573 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:00.831 malloc0 00:23:00.831 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:01.089 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KQvZkzf5QJ 00:23:01.347 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:01.913 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.KQvZkzf5QJ 00:23:11.881 Initializing NVMe Controllers 00:23:11.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:11.881 Initialization complete. Launching workers. 00:23:11.881 ======================================================== 00:23:11.881 Latency(us) 00:23:11.881 Device Information : IOPS MiB/s Average min max 00:23:11.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5580.10 21.80 11474.69 2353.06 14265.95 00:23:11.881 ======================================================== 00:23:11.881 Total : 5580.10 21.80 11474.69 2353.06 14265.95 00:23:11.881 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KQvZkzf5QJ 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KQvZkzf5QJ 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3497683 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3497683 /var/tmp/bdevperf.sock 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3497683 ']' 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:11.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:11.881 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.139 [2024-11-09 23:56:38.161520] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:23:12.139 [2024-11-09 23:56:38.161671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3497683 ] 00:23:12.139 [2024-11-09 23:56:38.296365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.397 [2024-11-09 23:56:38.415725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.331 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:13.331 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:13.331 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KQvZkzf5QJ 00:23:13.331 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:13.896 [2024-11-09 23:56:39.821800] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.896 TLSTESTn1 00:23:13.896 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:13.896 Running I/O for 10 seconds... 
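The PSK file prepared above (/tmp/tmp.KQvZkzf5QJ) holds a TLS PSK interchange string: the NVMeTLSkey-1 prefix, a two-digit hash indicator (01), and a base64 payload. It is registered on the target side with nvmf_subsystem_add_host --psk and on the bdevperf side with keyring_file_add_key before the TLS attach. A minimal sketch of producing and using such a key follows; the payload layout (secret bytes with a little-endian CRC-32 appended before base64 encoding) is an assumption about format_interchange_psk, not a verbatim copy of it:

    # Build an interchange-format PSK like the one echoed above and register
    # it with bdevperf's RPC server for a TLS-protected attach.
    secret=00112233445566778899aabbccddeeff     # hash indicator 01 = SHA-256
    psk=$(python3 -c 'import base64, sys, zlib; key = sys.argv[1].encode(); crc = zlib.crc32(key).to_bytes(4, "little"); print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":", end="")' "$secret")
    key_path=$(mktemp) && echo -n "$psk" > "$key_path" && chmod 0600 "$key_path"

    # Same RPCs the run above issues on the bdevperf side:
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0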
00:23:16.204 2391.00 IOPS, 9.34 MiB/s [2024-11-09T22:56:43.339Z] 2463.50 IOPS, 9.62 MiB/s [2024-11-09T22:56:44.274Z] 2481.67 IOPS, 9.69 MiB/s [2024-11-09T22:56:45.208Z] 2501.00 IOPS, 9.77 MiB/s [2024-11-09T22:56:46.142Z] 2510.20 IOPS, 9.81 MiB/s [2024-11-09T22:56:47.076Z] 2518.00 IOPS, 9.84 MiB/s [2024-11-09T22:56:48.451Z] 2522.43 IOPS, 9.85 MiB/s [2024-11-09T22:56:49.385Z] 2525.38 IOPS, 9.86 MiB/s [2024-11-09T22:56:50.339Z] 2529.22 IOPS, 9.88 MiB/s [2024-11-09T22:56:50.339Z] 2528.60 IOPS, 9.88 MiB/s 00:23:24.138 Latency(us) 00:23:24.138 [2024-11-09T22:56:50.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.138 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:24.138 Verification LBA range: start 0x0 length 0x2000 00:23:24.138 TLSTESTn1 : 10.03 2535.04 9.90 0.00 0.00 50399.49 9903.22 49516.09 00:23:24.138 [2024-11-09T22:56:50.339Z] =================================================================================================================== 00:23:24.138 [2024-11-09T22:56:50.339Z] Total : 2535.04 9.90 0.00 0.00 50399.49 9903.22 49516.09 00:23:24.138 { 00:23:24.138 "results": [ 00:23:24.138 { 00:23:24.138 "job": "TLSTESTn1", 00:23:24.138 "core_mask": "0x4", 00:23:24.138 "workload": "verify", 00:23:24.138 "status": "finished", 00:23:24.138 "verify_range": { 00:23:24.138 "start": 0, 00:23:24.138 "length": 8192 00:23:24.138 }, 00:23:24.138 "queue_depth": 128, 00:23:24.138 "io_size": 4096, 00:23:24.138 "runtime": 10.025086, 00:23:24.138 "iops": 2535.0405971579694, 00:23:24.138 "mibps": 9.902502332648318, 00:23:24.138 "io_failed": 0, 00:23:24.138 "io_timeout": 0, 00:23:24.138 "avg_latency_us": 50399.494165537224, 00:23:24.138 "min_latency_us": 9903.217777777778, 00:23:24.138 "max_latency_us": 49516.08888888889 00:23:24.138 } 00:23:24.138 ], 00:23:24.138 "core_count": 1 00:23:24.138 } 00:23:24.138 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:24.138 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3497683 00:23:24.138 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3497683 ']' 00:23:24.138 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3497683 00:23:24.138 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:24.138 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:24.138 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3497683 00:23:24.138 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:24.138 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:24.138 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3497683' 00:23:24.138 killing process with pid 3497683 00:23:24.138 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3497683 00:23:24.138 Received shutdown signal, test time was about 10.000000 seconds 00:23:24.138 00:23:24.138 Latency(us) 00:23:24.138 [2024-11-09T22:56:50.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.138 [2024-11-09T22:56:50.339Z] 
=================================================================================================================== 00:23:24.138 [2024-11-09T22:56:50.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:24.138 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3497683 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ltpAvppU6T 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ltpAvppU6T 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ltpAvppU6T 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ltpAvppU6T 00:23:25.095 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.095 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3499143 00:23:25.095 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:25.095 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:25.095 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3499143 /var/tmp/bdevperf.sock 00:23:25.095 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3499143 ']' 00:23:25.095 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.095 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:25.095 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:25.095 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:25.095 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.095 [2024-11-09 23:56:51.082810] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:23:25.095 [2024-11-09 23:56:51.082971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499143 ] 00:23:25.095 [2024-11-09 23:56:51.213564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.353 [2024-11-09 23:56:51.333844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.920 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:25.920 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:25.920 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ltpAvppU6T 00:23:26.178 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:26.436 [2024-11-09 23:56:52.575900] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.436 [2024-11-09 23:56:52.585328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:26.436 [2024-11-09 23:56:52.585334] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:26.436 [2024-11-09 23:56:52.586289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:26.436 [2024-11-09 23:56:52.587290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:26.436 [2024-11-09 23:56:52.587319] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:26.436 [2024-11-09 23:56:52.587344] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:26.436 [2024-11-09 23:56:52.587370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:26.436 request: 00:23:26.436 { 00:23:26.436 "name": "TLSTEST", 00:23:26.436 "trtype": "tcp", 00:23:26.436 "traddr": "10.0.0.2", 00:23:26.436 "adrfam": "ipv4", 00:23:26.436 "trsvcid": "4420", 00:23:26.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.436 "prchk_reftag": false, 00:23:26.436 "prchk_guard": false, 00:23:26.436 "hdgst": false, 00:23:26.436 "ddgst": false, 00:23:26.436 "psk": "key0", 00:23:26.436 "allow_unrecognized_csi": false, 00:23:26.436 "method": "bdev_nvme_attach_controller", 00:23:26.436 "req_id": 1 00:23:26.436 } 00:23:26.436 Got JSON-RPC error response 00:23:26.436 response: 00:23:26.436 { 00:23:26.436 "code": -5, 00:23:26.436 "message": "Input/output error" 00:23:26.436 } 00:23:26.436 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3499143 00:23:26.436 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3499143 ']' 00:23:26.436 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3499143 00:23:26.436 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:26.436 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:26.436 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3499143 00:23:26.695 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:26.695 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:26.695 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3499143' 00:23:26.695 killing process with pid 3499143 00:23:26.695 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3499143 00:23:26.695 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.695 00:23:26.695 Latency(us) 00:23:26.695 [2024-11-09T22:56:52.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.695 [2024-11-09T22:56:52.896Z] =================================================================================================================== 00:23:26.695 [2024-11-09T22:56:52.896Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:26.695 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3499143 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.KQvZkzf5QJ 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.KQvZkzf5QJ 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.KQvZkzf5QJ 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KQvZkzf5QJ 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3499416 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3499416 /var/tmp/bdevperf.sock 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3499416 ']' 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:27.261 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.518 [2024-11-09 23:56:53.530481] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:23:27.519 [2024-11-09 23:56:53.530632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499416 ] 00:23:27.519 [2024-11-09 23:56:53.665399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.776 [2024-11-09 23:56:53.791289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.343 23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:28.343 23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:28.343 23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KQvZkzf5QJ 00:23:28.601 23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:28.858 [2024-11-09 23:56:55.053154] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.116 [2024-11-09 23:56:55.063643] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:29.116 [2024-11-09 23:56:55.063685] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:29.116 [2024-11-09 23:56:55.063782] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:29.116 [2024-11-09 23:56:55.063832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:29.116 [2024-11-09 23:56:55.064767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:29.116 [2024-11-09 23:56:55.065770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:29.116 [2024-11-09 23:56:55.065805] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:29.116 [2024-11-09 23:56:55.065831] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:29.116 [2024-11-09 23:56:55.065857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
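This case (tls.sh@150) reuses a valid key file but connects with the wrong host NQN. The target builds the TLS PSK identity from the host and subsystem NQNs (the log shows it as "NVMe0R01 <hostnqn> <subnqn>") and checks it against the per-host PSKs registered on the target, so a connect as host2 with a key bound to host1 is rejected server-side before any I/O. A sketch of the target-side binding this lookup is checked against, using the command shape and names from the positive case later in this log (tls.sh@58/@59):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
# On the target, a PSK is bound to a specific (subsystem, host) pair:
$rpc keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# A client presenting identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"
# has no matching entry, hence "Could not find PSK for identity" above.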
00:23:29.116 request: 00:23:29.116 { 00:23:29.116 "name": "TLSTEST", 00:23:29.116 "trtype": "tcp", 00:23:29.116 "traddr": "10.0.0.2", 00:23:29.116 "adrfam": "ipv4", 00:23:29.116 "trsvcid": "4420", 00:23:29.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.116 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:29.116 "prchk_reftag": false, 00:23:29.116 "prchk_guard": false, 00:23:29.116 "hdgst": false, 00:23:29.116 "ddgst": false, 00:23:29.116 "psk": "key0", 00:23:29.116 "allow_unrecognized_csi": false, 00:23:29.116 "method": "bdev_nvme_attach_controller", 00:23:29.116 "req_id": 1 00:23:29.116 } 00:23:29.116 Got JSON-RPC error response 00:23:29.116 response: 00:23:29.116 { 00:23:29.116 "code": -5, 00:23:29.116 "message": "Input/output error" 00:23:29.116 } 00:23:29.116 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3499416 00:23:29.116 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3499416 ']' 00:23:29.116 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3499416 00:23:29.116 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:29.117 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:29.117 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3499416 00:23:29.117 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:29.117 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:29.117 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3499416' 00:23:29.117 killing process with pid 3499416 00:23:29.117 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3499416 00:23:29.117 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.117 00:23:29.117 Latency(us) 00:23:29.117 [2024-11-09T22:56:55.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.117 [2024-11-09T22:56:55.318Z] =================================================================================================================== 00:23:29.117 [2024-11-09T22:56:55.318Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.117 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3499416 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.KQvZkzf5QJ 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.KQvZkzf5QJ 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.KQvZkzf5QJ 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KQvZkzf5QJ 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3499692 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3499692 /var/tmp/bdevperf.sock 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3499692 ']' 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:30.051 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.051 [2024-11-09 23:56:56.004670] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:23:30.051 [2024-11-09 23:56:56.004810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499692 ] 00:23:30.051 [2024-11-09 23:56:56.153069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.309 [2024-11-09 23:56:56.279391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.875 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:30.875 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:30.875 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KQvZkzf5QJ 00:23:31.440 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:31.440 [2024-11-09 23:56:57.611748] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.440 [2024-11-09 23:56:57.621341] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:31.440 [2024-11-09 23:56:57.621380] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:31.440 [2024-11-09 23:56:57.621454] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:31.440 [2024-11-09 23:56:57.622454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:31.440 [2024-11-09 23:56:57.623433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:31.440 [2024-11-09 23:56:57.624427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:31.440 [2024-11-09 23:56:57.624460] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:31.440 [2024-11-09 23:56:57.624482] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:31.440 [2024-11-09 23:56:57.624513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
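tls.sh@153 is the converse check: the same key file and the right host NQN, but the wrong subsystem (cnode2). The identity "NVMe0R01 ...host1 ...cnode2" again has no server-side match, so the failure path is identical to the previous case. All of these negative cases use the same expected-failure wrapper (a sketch; NOT is the suite's helper that succeeds only when the wrapped command fails):

# Fails at connect time because no PSK is registered for the (cnode2, host1) pair:
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.KQvZkzf5QJ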
00:23:31.440 request: 00:23:31.440 { 00:23:31.440 "name": "TLSTEST", 00:23:31.440 "trtype": "tcp", 00:23:31.440 "traddr": "10.0.0.2", 00:23:31.440 "adrfam": "ipv4", 00:23:31.440 "trsvcid": "4420", 00:23:31.440 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:31.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.440 "prchk_reftag": false, 00:23:31.440 "prchk_guard": false, 00:23:31.440 "hdgst": false, 00:23:31.440 "ddgst": false, 00:23:31.440 "psk": "key0", 00:23:31.440 "allow_unrecognized_csi": false, 00:23:31.440 "method": "bdev_nvme_attach_controller", 00:23:31.440 "req_id": 1 00:23:31.440 } 00:23:31.440 Got JSON-RPC error response 00:23:31.440 response: 00:23:31.440 { 00:23:31.440 "code": -5, 00:23:31.440 "message": "Input/output error" 00:23:31.440 } 00:23:31.440 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3499692 00:23:31.440 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3499692 ']' 00:23:31.440 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3499692 00:23:31.440 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:31.699 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:31.699 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3499692 00:23:31.699 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:31.699 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:31.699 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3499692' 00:23:31.699 killing process with pid 3499692 00:23:31.699 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3499692 00:23:31.699 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.699 00:23:31.699 Latency(us) 00:23:31.699 [2024-11-09T22:56:57.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.699 [2024-11-09T22:56:57.900Z] =================================================================================================================== 00:23:31.699 [2024-11-09T22:56:57.900Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.699 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3499692 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:32.265 
23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3500089 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3500089 /var/tmp/bdevperf.sock 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3500089 ']' 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:32.265 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.524 [2024-11-09 23:56:58.528034] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
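The tls.sh@156 case never reaches the network: it passes an empty string as the PSK path, and the file-based keyring only accepts absolute paths, so key registration itself is rejected and the subsequent attach fails with -126 ("Required key not available") rather than an I/O error. A sketch of the failing call (socket path as above):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$rpc keyring_file_add_key key0 ''    # rejected: "Non-absolute paths are not allowed"
# "key0" therefore never exists, and bdev_nvme_attach_controller --psk key0 cannot load it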
00:23:32.524 [2024-11-09 23:56:58.528170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3500089 ] 00:23:32.524 [2024-11-09 23:56:58.663929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.783 [2024-11-09 23:56:58.787553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.717 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:33.717 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:33.717 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:33.717 [2024-11-09 23:56:59.801547] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:33.717 [2024-11-09 23:56:59.801649] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:33.717 request: 00:23:33.717 { 00:23:33.717 "name": "key0", 00:23:33.717 "path": "", 00:23:33.717 "method": "keyring_file_add_key", 00:23:33.717 "req_id": 1 00:23:33.717 } 00:23:33.717 Got JSON-RPC error response 00:23:33.717 response: 00:23:33.717 { 00:23:33.717 "code": -1, 00:23:33.717 "message": "Operation not permitted" 00:23:33.717 } 00:23:33.717 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:33.974 [2024-11-09 23:57:00.078506] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.974 [2024-11-09 23:57:00.078628] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:33.974 request: 00:23:33.974 { 00:23:33.974 "name": "TLSTEST", 00:23:33.974 "trtype": "tcp", 00:23:33.974 "traddr": "10.0.0.2", 00:23:33.974 "adrfam": "ipv4", 00:23:33.974 "trsvcid": "4420", 00:23:33.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.974 "prchk_reftag": false, 00:23:33.974 "prchk_guard": false, 00:23:33.974 "hdgst": false, 00:23:33.974 "ddgst": false, 00:23:33.974 "psk": "key0", 00:23:33.974 "allow_unrecognized_csi": false, 00:23:33.974 "method": "bdev_nvme_attach_controller", 00:23:33.974 "req_id": 1 00:23:33.974 } 00:23:33.974 Got JSON-RPC error response 00:23:33.974 response: 00:23:33.974 { 00:23:33.974 "code": -126, 00:23:33.974 "message": "Required key not available" 00:23:33.974 } 00:23:33.974 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3500089 00:23:33.974 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3500089 ']' 00:23:33.974 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3500089 00:23:33.974 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:33.974 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:33.974 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3500089 00:23:33.975 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:33.975 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:33.975 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3500089' 00:23:33.975 killing process with pid 3500089 00:23:33.975 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3500089 00:23:33.975 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.975 00:23:33.975 Latency(us) 00:23:33.975 [2024-11-09T22:57:00.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.975 [2024-11-09T22:57:00.176Z] =================================================================================================================== 00:23:33.975 [2024-11-09T22:57:00.176Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:33.975 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3500089 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3495535 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3495535 ']' 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3495535 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3495535 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3495535' 00:23:34.908 killing process with pid 3495535 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3495535 00:23:34.908 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3495535 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:36.283 23:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.WSVA3YIoA1 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.WSVA3YIoA1 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3500626 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3500626 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3500626 ']' 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:36.283 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.283 [2024-11-09 23:57:02.439171] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:23:36.283 [2024-11-09 23:57:02.439337] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.541 [2024-11-09 23:57:02.592879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.541 [2024-11-09 23:57:02.731310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.541 [2024-11-09 23:57:02.731398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:36.541 [2024-11-09 23:57:02.731439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.541 [2024-11-09 23:57:02.731481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.541 [2024-11-09 23:57:02.731514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:36.541 [2024-11-09 23:57:02.733281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.473 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:37.473 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:37.473 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.473 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:37.473 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.473 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.473 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.WSVA3YIoA1 00:23:37.473 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WSVA3YIoA1 00:23:37.473 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:37.729 [2024-11-09 23:57:03.716153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.729 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:37.985 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:38.242 [2024-11-09 23:57:04.345894] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:38.242 [2024-11-09 23:57:04.346277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.242 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:38.499 malloc0 00:23:38.499 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:38.756 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WSVA3YIoA1 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WSVA3YIoA1 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3501043 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3501043 /var/tmp/bdevperf.sock 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3501043 ']' 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:39.322 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.580 [2024-11-09 23:57:05.575229] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
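The positive case (tls.sh@160 onward) first builds a PSK in the NVMe TLS interchange format (the "NVMeTLSkey-1:02:..." string above, produced by the suite's format_interchange_psk helper), stores it in a temp file with owner-only permissions, and then configures a fresh target with a TLS-enabled listener and a host entry bound to that key. A consolidated sketch of the target-side steps shown above, with the paths and NQNs taken from the log:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
key_path=/tmp/tmp.WSVA3YIoA1
echo -n "NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:" > "$key_path"
chmod 0600 "$key_path"                                  # group/other access is rejected later (tls.sh@171)

$rpc nvmf_create_transport -t tcp -o                    # TCP transport
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0              # backing namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key_path"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With this in place, the bdevperf attach below succeeds (bdev TLSTESTn1) and the 10-second verify workload completes at roughly 2.6 k IOPS at 4 KiB (about 10 MiB/s), as reported in the results JSON that follows.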
00:23:39.580 [2024-11-09 23:57:05.575361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3501043 ] 00:23:39.580 [2024-11-09 23:57:05.712361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.839 [2024-11-09 23:57:05.840055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.404 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:40.404 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:40.404 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1 00:23:40.662 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:40.920 [2024-11-09 23:57:07.078164] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.179 TLSTESTn1 00:23:41.179 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:41.179 Running I/O for 10 seconds... 00:23:43.488 2590.00 IOPS, 10.12 MiB/s [2024-11-09T22:57:10.621Z] 2604.50 IOPS, 10.17 MiB/s [2024-11-09T22:57:11.554Z] 2618.00 IOPS, 10.23 MiB/s [2024-11-09T22:57:12.487Z] 2634.00 IOPS, 10.29 MiB/s [2024-11-09T22:57:13.420Z] 2630.60 IOPS, 10.28 MiB/s [2024-11-09T22:57:14.354Z] 2634.33 IOPS, 10.29 MiB/s [2024-11-09T22:57:15.728Z] 2630.71 IOPS, 10.28 MiB/s [2024-11-09T22:57:16.663Z] 2633.62 IOPS, 10.29 MiB/s [2024-11-09T22:57:17.597Z] 2638.89 IOPS, 10.31 MiB/s [2024-11-09T22:57:17.597Z] 2642.20 IOPS, 10.32 MiB/s 00:23:51.396 Latency(us) 00:23:51.396 [2024-11-09T22:57:17.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.396 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:51.396 Verification LBA range: start 0x0 length 0x2000 00:23:51.396 TLSTESTn1 : 10.03 2646.39 10.34 0.00 0.00 48265.49 8786.68 35146.71 00:23:51.396 [2024-11-09T22:57:17.597Z] =================================================================================================================== 00:23:51.396 [2024-11-09T22:57:17.597Z] Total : 2646.39 10.34 0.00 0.00 48265.49 8786.68 35146.71 00:23:51.396 { 00:23:51.396 "results": [ 00:23:51.396 { 00:23:51.396 "job": "TLSTESTn1", 00:23:51.396 "core_mask": "0x4", 00:23:51.396 "workload": "verify", 00:23:51.396 "status": "finished", 00:23:51.396 "verify_range": { 00:23:51.396 "start": 0, 00:23:51.396 "length": 8192 00:23:51.396 }, 00:23:51.396 "queue_depth": 128, 00:23:51.396 "io_size": 4096, 00:23:51.396 "runtime": 10.032534, 00:23:51.396 "iops": 2646.3902340126633, 00:23:51.396 "mibps": 10.337461851611966, 00:23:51.396 "io_failed": 0, 00:23:51.396 "io_timeout": 0, 00:23:51.396 "avg_latency_us": 48265.48903809723, 00:23:51.396 "min_latency_us": 8786.678518518518, 00:23:51.396 "max_latency_us": 35146.71407407407 00:23:51.396 } 00:23:51.396 ], 00:23:51.396 
"core_count": 1 00:23:51.396 } 00:23:51.396 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.396 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3501043 00:23:51.396 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3501043 ']' 00:23:51.396 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3501043 00:23:51.396 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:51.396 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:51.396 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3501043 00:23:51.396 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:51.396 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:51.396 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3501043' 00:23:51.396 killing process with pid 3501043 00:23:51.396 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3501043 00:23:51.396 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.396 00:23:51.396 Latency(us) 00:23:51.396 [2024-11-09T22:57:17.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.396 [2024-11-09T22:57:17.597Z] =================================================================================================================== 00:23:51.396 [2024-11-09T22:57:17.597Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.396 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3501043 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.WSVA3YIoA1 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WSVA3YIoA1 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WSVA3YIoA1 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WSVA3YIoA1 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WSVA3YIoA1 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3502998 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3502998 /var/tmp/bdevperf.sock 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3502998 ']' 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:52.332 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.332 [2024-11-09 23:57:18.347009] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
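At tls.sh@171 the same key file is deliberately made world-readable (chmod 0666). The file-based keyring refuses key files that are accessible to group or others, so the registration attempt that follows fails with "Invalid permissions for key file ... 0100666", and with no key0 present the attach again ends in -126. A sketch of the failing pair (paths as above):

chmod 0666 /tmp/tmp.WSVA3YIoA1                          # loosen permissions on purpose
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$rpc keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1      # rejected: invalid permissions (0100666)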
00:23:52.332 [2024-11-09 23:57:18.347147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502998 ] 00:23:52.332 [2024-11-09 23:57:18.478432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.590 [2024-11-09 23:57:18.597688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.155 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:53.155 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:53.155 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1 00:23:53.413 [2024-11-09 23:57:19.581057] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.WSVA3YIoA1': 0100666 00:23:53.413 [2024-11-09 23:57:19.581114] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:53.413 request: 00:23:53.413 { 00:23:53.413 "name": "key0", 00:23:53.413 "path": "/tmp/tmp.WSVA3YIoA1", 00:23:53.413 "method": "keyring_file_add_key", 00:23:53.413 "req_id": 1 00:23:53.413 } 00:23:53.413 Got JSON-RPC error response 00:23:53.413 response: 00:23:53.413 { 00:23:53.413 "code": -1, 00:23:53.413 "message": "Operation not permitted" 00:23:53.413 } 00:23:53.413 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.670 [2024-11-09 23:57:19.841888] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.670 [2024-11-09 23:57:19.841966] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:53.670 request: 00:23:53.670 { 00:23:53.670 "name": "TLSTEST", 00:23:53.670 "trtype": "tcp", 00:23:53.670 "traddr": "10.0.0.2", 00:23:53.670 "adrfam": "ipv4", 00:23:53.670 "trsvcid": "4420", 00:23:53.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.670 "prchk_reftag": false, 00:23:53.670 "prchk_guard": false, 00:23:53.670 "hdgst": false, 00:23:53.670 "ddgst": false, 00:23:53.670 "psk": "key0", 00:23:53.670 "allow_unrecognized_csi": false, 00:23:53.670 "method": "bdev_nvme_attach_controller", 00:23:53.670 "req_id": 1 00:23:53.670 } 00:23:53.670 Got JSON-RPC error response 00:23:53.670 response: 00:23:53.670 { 00:23:53.670 "code": -126, 00:23:53.670 "message": "Required key not available" 00:23:53.670 } 00:23:53.670 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3502998 00:23:53.670 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3502998 ']' 00:23:53.670 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3502998 00:23:53.670 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:53.670 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:53.670 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3502998 00:23:53.927 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:53.927 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:53.927 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3502998' 00:23:53.927 killing process with pid 3502998 00:23:53.927 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3502998 00:23:53.927 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.927 00:23:53.927 Latency(us) 00:23:53.927 [2024-11-09T22:57:20.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.927 [2024-11-09T22:57:20.128Z] =================================================================================================================== 00:23:53.927 [2024-11-09T22:57:20.128Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:53.927 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3502998 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3500626 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3500626 ']' 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3500626 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3500626 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3500626' 00:23:54.861 killing process with pid 3500626 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3500626 00:23:54.861 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3500626 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3503423 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3503423 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3503423 ']' 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:56.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.283 [2024-11-09 23:57:22.172718] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:23:56.283 [2024-11-09 23:57:22.172872] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.283 [2024-11-09 23:57:22.337614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.283 [2024-11-09 23:57:22.474096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.283 [2024-11-09 23:57:22.474197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.283 [2024-11-09 23:57:22.474234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.283 [2024-11-09 23:57:22.474285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.283 [2024-11-09 23:57:22.474319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:56.283 [2024-11-09 23:57:22.476087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.217 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:57.217 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:57.217 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.217 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.217 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.217 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.217 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.WSVA3YIoA1 00:23:57.217 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:57.217 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.WSVA3YIoA1 00:23:57.217 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:57.217 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.217 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:57.218 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.218 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.WSVA3YIoA1 00:23:57.218 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WSVA3YIoA1 00:23:57.218 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:57.218 [2024-11-09 23:57:23.416851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.475 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:57.733 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:57.991 [2024-11-09 23:57:23.954205] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.991 [2024-11-09 23:57:23.954600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.991 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:58.249 malloc0 00:23:58.249 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:58.507 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1 00:23:58.765 [2024-11-09 
23:57:24.777870] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.WSVA3YIoA1': 0100666 00:23:58.765 [2024-11-09 23:57:24.777976] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:58.765 request: 00:23:58.765 { 00:23:58.765 "name": "key0", 00:23:58.765 "path": "/tmp/tmp.WSVA3YIoA1", 00:23:58.765 "method": "keyring_file_add_key", 00:23:58.765 "req_id": 1 00:23:58.765 } 00:23:58.765 Got JSON-RPC error response 00:23:58.765 response: 00:23:58.765 { 00:23:58.765 "code": -1, 00:23:58.765 "message": "Operation not permitted" 00:23:58.765 } 00:23:58.765 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:59.023 [2024-11-09 23:57:25.046662] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:59.023 [2024-11-09 23:57:25.046738] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:59.023 request: 00:23:59.023 { 00:23:59.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.023 "host": "nqn.2016-06.io.spdk:host1", 00:23:59.023 "psk": "key0", 00:23:59.023 "method": "nvmf_subsystem_add_host", 00:23:59.023 "req_id": 1 00:23:59.023 } 00:23:59.023 Got JSON-RPC error response 00:23:59.023 response: 00:23:59.023 { 00:23:59.023 "code": -32603, 00:23:59.023 "message": "Internal error" 00:23:59.023 } 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3503423 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3503423 ']' 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3503423 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3503423 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3503423' 00:23:59.023 killing process with pid 3503423 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3503423 00:23:59.023 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3503423 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.WSVA3YIoA1 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:00.397 23:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3503980 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3503980 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3503980 ']' 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:00.397 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.397 [2024-11-09 23:57:26.453735] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:24:00.397 [2024-11-09 23:57:26.453885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.397 [2024-11-09 23:57:26.593451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.656 [2024-11-09 23:57:26.712667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.656 [2024-11-09 23:57:26.712746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.656 [2024-11-09 23:57:26.712780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.656 [2024-11-09 23:57:26.712827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.656 [2024-11-09 23:57:26.712855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:00.656 [2024-11-09 23:57:26.714475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.590 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:01.590 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:01.590 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.590 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:01.590 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.590 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.590 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.WSVA3YIoA1 00:24:01.590 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WSVA3YIoA1 00:24:01.590 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:01.590 [2024-11-09 23:57:27.702952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.590 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:01.847 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:02.414 [2024-11-09 23:57:28.320800] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.414 [2024-11-09 23:57:28.321203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.414 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:02.671 malloc0 00:24:02.671 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:02.930 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1 00:24:03.188 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:03.447 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3504394 00:24:03.447 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:03.447 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:03.447 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3504394 /var/tmp/bdevperf.sock 00:24:03.447 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3504394 ']' 00:24:03.447 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.447 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:03.447 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.447 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:03.447 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.447 [2024-11-09 23:57:29.522679] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:24:03.447 [2024-11-09 23:57:29.522815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3504394 ] 00:24:03.704 [2024-11-09 23:57:29.657844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.704 [2024-11-09 23:57:29.778363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.636 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:04.636 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:04.636 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1 00:24:04.893 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:04.893 [2024-11-09 23:57:31.092950] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.150 TLSTESTn1 00:24:05.150 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:05.408 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:05.408 "subsystems": [ 00:24:05.408 { 00:24:05.408 "subsystem": "keyring", 00:24:05.408 "config": [ 00:24:05.408 { 00:24:05.408 "method": "keyring_file_add_key", 00:24:05.408 "params": { 00:24:05.408 "name": "key0", 00:24:05.408 "path": "/tmp/tmp.WSVA3YIoA1" 00:24:05.408 } 00:24:05.408 } 00:24:05.408 ] 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "subsystem": "iobuf", 00:24:05.408 "config": [ 00:24:05.408 { 00:24:05.408 "method": "iobuf_set_options", 00:24:05.408 "params": { 00:24:05.408 "small_pool_count": 8192, 00:24:05.408 "large_pool_count": 1024, 00:24:05.408 "small_bufsize": 8192, 00:24:05.408 "large_bufsize": 135168, 00:24:05.408 "enable_numa": false 00:24:05.408 } 00:24:05.408 } 00:24:05.408 ] 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "subsystem": "sock", 00:24:05.408 "config": [ 00:24:05.408 { 00:24:05.408 "method": "sock_set_default_impl", 00:24:05.408 "params": { 00:24:05.408 "impl_name": "posix" 
00:24:05.408 } 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "method": "sock_impl_set_options", 00:24:05.408 "params": { 00:24:05.408 "impl_name": "ssl", 00:24:05.408 "recv_buf_size": 4096, 00:24:05.408 "send_buf_size": 4096, 00:24:05.408 "enable_recv_pipe": true, 00:24:05.408 "enable_quickack": false, 00:24:05.408 "enable_placement_id": 0, 00:24:05.408 "enable_zerocopy_send_server": true, 00:24:05.408 "enable_zerocopy_send_client": false, 00:24:05.408 "zerocopy_threshold": 0, 00:24:05.408 "tls_version": 0, 00:24:05.408 "enable_ktls": false 00:24:05.408 } 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "method": "sock_impl_set_options", 00:24:05.408 "params": { 00:24:05.408 "impl_name": "posix", 00:24:05.408 "recv_buf_size": 2097152, 00:24:05.408 "send_buf_size": 2097152, 00:24:05.408 "enable_recv_pipe": true, 00:24:05.408 "enable_quickack": false, 00:24:05.408 "enable_placement_id": 0, 00:24:05.408 "enable_zerocopy_send_server": true, 00:24:05.408 "enable_zerocopy_send_client": false, 00:24:05.408 "zerocopy_threshold": 0, 00:24:05.408 "tls_version": 0, 00:24:05.408 "enable_ktls": false 00:24:05.408 } 00:24:05.408 } 00:24:05.408 ] 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "subsystem": "vmd", 00:24:05.408 "config": [] 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "subsystem": "accel", 00:24:05.408 "config": [ 00:24:05.408 { 00:24:05.408 "method": "accel_set_options", 00:24:05.408 "params": { 00:24:05.408 "small_cache_size": 128, 00:24:05.408 "large_cache_size": 16, 00:24:05.408 "task_count": 2048, 00:24:05.408 "sequence_count": 2048, 00:24:05.408 "buf_count": 2048 00:24:05.408 } 00:24:05.408 } 00:24:05.408 ] 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "subsystem": "bdev", 00:24:05.408 "config": [ 00:24:05.408 { 00:24:05.408 "method": "bdev_set_options", 00:24:05.408 "params": { 00:24:05.408 "bdev_io_pool_size": 65535, 00:24:05.408 "bdev_io_cache_size": 256, 00:24:05.408 "bdev_auto_examine": true, 00:24:05.408 "iobuf_small_cache_size": 128, 00:24:05.408 "iobuf_large_cache_size": 16 00:24:05.408 } 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "method": "bdev_raid_set_options", 00:24:05.408 "params": { 00:24:05.408 "process_window_size_kb": 1024, 00:24:05.408 "process_max_bandwidth_mb_sec": 0 00:24:05.408 } 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "method": "bdev_iscsi_set_options", 00:24:05.408 "params": { 00:24:05.408 "timeout_sec": 30 00:24:05.408 } 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "method": "bdev_nvme_set_options", 00:24:05.408 "params": { 00:24:05.408 "action_on_timeout": "none", 00:24:05.408 "timeout_us": 0, 00:24:05.408 "timeout_admin_us": 0, 00:24:05.408 "keep_alive_timeout_ms": 10000, 00:24:05.408 "arbitration_burst": 0, 00:24:05.408 "low_priority_weight": 0, 00:24:05.408 "medium_priority_weight": 0, 00:24:05.408 "high_priority_weight": 0, 00:24:05.408 "nvme_adminq_poll_period_us": 10000, 00:24:05.408 "nvme_ioq_poll_period_us": 0, 00:24:05.408 "io_queue_requests": 0, 00:24:05.408 "delay_cmd_submit": true, 00:24:05.408 "transport_retry_count": 4, 00:24:05.408 "bdev_retry_count": 3, 00:24:05.408 "transport_ack_timeout": 0, 00:24:05.408 "ctrlr_loss_timeout_sec": 0, 00:24:05.408 "reconnect_delay_sec": 0, 00:24:05.408 "fast_io_fail_timeout_sec": 0, 00:24:05.408 "disable_auto_failback": false, 00:24:05.408 "generate_uuids": false, 00:24:05.408 "transport_tos": 0, 00:24:05.408 "nvme_error_stat": false, 00:24:05.408 "rdma_srq_size": 0, 00:24:05.408 "io_path_stat": false, 00:24:05.408 "allow_accel_sequence": false, 00:24:05.408 "rdma_max_cq_size": 0, 00:24:05.408 
"rdma_cm_event_timeout_ms": 0, 00:24:05.408 "dhchap_digests": [ 00:24:05.408 "sha256", 00:24:05.408 "sha384", 00:24:05.408 "sha512" 00:24:05.408 ], 00:24:05.408 "dhchap_dhgroups": [ 00:24:05.408 "null", 00:24:05.408 "ffdhe2048", 00:24:05.408 "ffdhe3072", 00:24:05.408 "ffdhe4096", 00:24:05.408 "ffdhe6144", 00:24:05.408 "ffdhe8192" 00:24:05.408 ] 00:24:05.408 } 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "method": "bdev_nvme_set_hotplug", 00:24:05.408 "params": { 00:24:05.408 "period_us": 100000, 00:24:05.408 "enable": false 00:24:05.408 } 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "method": "bdev_malloc_create", 00:24:05.408 "params": { 00:24:05.408 "name": "malloc0", 00:24:05.408 "num_blocks": 8192, 00:24:05.408 "block_size": 4096, 00:24:05.408 "physical_block_size": 4096, 00:24:05.408 "uuid": "72c7ffdd-60f2-4e1f-83a0-c56ebbb10cec", 00:24:05.408 "optimal_io_boundary": 0, 00:24:05.408 "md_size": 0, 00:24:05.408 "dif_type": 0, 00:24:05.408 "dif_is_head_of_md": false, 00:24:05.408 "dif_pi_format": 0 00:24:05.408 } 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "method": "bdev_wait_for_examine" 00:24:05.408 } 00:24:05.408 ] 00:24:05.408 }, 00:24:05.408 { 00:24:05.408 "subsystem": "nbd", 00:24:05.408 "config": [] 00:24:05.408 }, 00:24:05.408 { 00:24:05.409 "subsystem": "scheduler", 00:24:05.409 "config": [ 00:24:05.409 { 00:24:05.409 "method": "framework_set_scheduler", 00:24:05.409 "params": { 00:24:05.409 "name": "static" 00:24:05.409 } 00:24:05.409 } 00:24:05.409 ] 00:24:05.409 }, 00:24:05.409 { 00:24:05.409 "subsystem": "nvmf", 00:24:05.409 "config": [ 00:24:05.409 { 00:24:05.409 "method": "nvmf_set_config", 00:24:05.409 "params": { 00:24:05.409 "discovery_filter": "match_any", 00:24:05.409 "admin_cmd_passthru": { 00:24:05.409 "identify_ctrlr": false 00:24:05.409 }, 00:24:05.409 "dhchap_digests": [ 00:24:05.409 "sha256", 00:24:05.409 "sha384", 00:24:05.409 "sha512" 00:24:05.409 ], 00:24:05.409 "dhchap_dhgroups": [ 00:24:05.409 "null", 00:24:05.409 "ffdhe2048", 00:24:05.409 "ffdhe3072", 00:24:05.409 "ffdhe4096", 00:24:05.409 "ffdhe6144", 00:24:05.409 "ffdhe8192" 00:24:05.409 ] 00:24:05.409 } 00:24:05.409 }, 00:24:05.409 { 00:24:05.409 "method": "nvmf_set_max_subsystems", 00:24:05.409 "params": { 00:24:05.409 "max_subsystems": 1024 00:24:05.409 } 00:24:05.409 }, 00:24:05.409 { 00:24:05.409 "method": "nvmf_set_crdt", 00:24:05.409 "params": { 00:24:05.409 "crdt1": 0, 00:24:05.409 "crdt2": 0, 00:24:05.409 "crdt3": 0 00:24:05.409 } 00:24:05.409 }, 00:24:05.409 { 00:24:05.409 "method": "nvmf_create_transport", 00:24:05.409 "params": { 00:24:05.409 "trtype": "TCP", 00:24:05.409 "max_queue_depth": 128, 00:24:05.409 "max_io_qpairs_per_ctrlr": 127, 00:24:05.409 "in_capsule_data_size": 4096, 00:24:05.409 "max_io_size": 131072, 00:24:05.409 "io_unit_size": 131072, 00:24:05.409 "max_aq_depth": 128, 00:24:05.409 "num_shared_buffers": 511, 00:24:05.409 "buf_cache_size": 4294967295, 00:24:05.409 "dif_insert_or_strip": false, 00:24:05.409 "zcopy": false, 00:24:05.409 "c2h_success": false, 00:24:05.409 "sock_priority": 0, 00:24:05.409 "abort_timeout_sec": 1, 00:24:05.409 "ack_timeout": 0, 00:24:05.409 "data_wr_pool_size": 0 00:24:05.409 } 00:24:05.409 }, 00:24:05.409 { 00:24:05.409 "method": "nvmf_create_subsystem", 00:24:05.409 "params": { 00:24:05.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.409 "allow_any_host": false, 00:24:05.409 "serial_number": "SPDK00000000000001", 00:24:05.409 "model_number": "SPDK bdev Controller", 00:24:05.409 "max_namespaces": 10, 00:24:05.409 "min_cntlid": 1, 00:24:05.409 
"max_cntlid": 65519, 00:24:05.409 "ana_reporting": false 00:24:05.409 } 00:24:05.409 }, 00:24:05.409 { 00:24:05.409 "method": "nvmf_subsystem_add_host", 00:24:05.409 "params": { 00:24:05.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.409 "host": "nqn.2016-06.io.spdk:host1", 00:24:05.409 "psk": "key0" 00:24:05.409 } 00:24:05.409 }, 00:24:05.409 { 00:24:05.409 "method": "nvmf_subsystem_add_ns", 00:24:05.409 "params": { 00:24:05.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.409 "namespace": { 00:24:05.409 "nsid": 1, 00:24:05.409 "bdev_name": "malloc0", 00:24:05.409 "nguid": "72C7FFDD60F24E1F83A0C56EBBB10CEC", 00:24:05.409 "uuid": "72c7ffdd-60f2-4e1f-83a0-c56ebbb10cec", 00:24:05.409 "no_auto_visible": false 00:24:05.409 } 00:24:05.409 } 00:24:05.409 }, 00:24:05.409 { 00:24:05.409 "method": "nvmf_subsystem_add_listener", 00:24:05.409 "params": { 00:24:05.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.409 "listen_address": { 00:24:05.409 "trtype": "TCP", 00:24:05.409 "adrfam": "IPv4", 00:24:05.409 "traddr": "10.0.0.2", 00:24:05.409 "trsvcid": "4420" 00:24:05.409 }, 00:24:05.409 "secure_channel": true 00:24:05.409 } 00:24:05.409 } 00:24:05.409 ] 00:24:05.409 } 00:24:05.409 ] 00:24:05.409 }' 00:24:05.409 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:05.975 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:05.975 "subsystems": [ 00:24:05.975 { 00:24:05.975 "subsystem": "keyring", 00:24:05.975 "config": [ 00:24:05.975 { 00:24:05.975 "method": "keyring_file_add_key", 00:24:05.975 "params": { 00:24:05.975 "name": "key0", 00:24:05.975 "path": "/tmp/tmp.WSVA3YIoA1" 00:24:05.975 } 00:24:05.975 } 00:24:05.975 ] 00:24:05.975 }, 00:24:05.975 { 00:24:05.975 "subsystem": "iobuf", 00:24:05.975 "config": [ 00:24:05.975 { 00:24:05.975 "method": "iobuf_set_options", 00:24:05.975 "params": { 00:24:05.975 "small_pool_count": 8192, 00:24:05.975 "large_pool_count": 1024, 00:24:05.975 "small_bufsize": 8192, 00:24:05.975 "large_bufsize": 135168, 00:24:05.975 "enable_numa": false 00:24:05.975 } 00:24:05.975 } 00:24:05.975 ] 00:24:05.975 }, 00:24:05.975 { 00:24:05.975 "subsystem": "sock", 00:24:05.975 "config": [ 00:24:05.975 { 00:24:05.975 "method": "sock_set_default_impl", 00:24:05.975 "params": { 00:24:05.975 "impl_name": "posix" 00:24:05.975 } 00:24:05.975 }, 00:24:05.975 { 00:24:05.975 "method": "sock_impl_set_options", 00:24:05.975 "params": { 00:24:05.975 "impl_name": "ssl", 00:24:05.975 "recv_buf_size": 4096, 00:24:05.975 "send_buf_size": 4096, 00:24:05.975 "enable_recv_pipe": true, 00:24:05.975 "enable_quickack": false, 00:24:05.975 "enable_placement_id": 0, 00:24:05.975 "enable_zerocopy_send_server": true, 00:24:05.975 "enable_zerocopy_send_client": false, 00:24:05.975 "zerocopy_threshold": 0, 00:24:05.975 "tls_version": 0, 00:24:05.975 "enable_ktls": false 00:24:05.975 } 00:24:05.975 }, 00:24:05.975 { 00:24:05.975 "method": "sock_impl_set_options", 00:24:05.975 "params": { 00:24:05.975 "impl_name": "posix", 00:24:05.975 "recv_buf_size": 2097152, 00:24:05.975 "send_buf_size": 2097152, 00:24:05.975 "enable_recv_pipe": true, 00:24:05.975 "enable_quickack": false, 00:24:05.975 "enable_placement_id": 0, 00:24:05.975 "enable_zerocopy_send_server": true, 00:24:05.975 "enable_zerocopy_send_client": false, 00:24:05.975 "zerocopy_threshold": 0, 00:24:05.975 "tls_version": 0, 00:24:05.975 "enable_ktls": false 00:24:05.975 } 00:24:05.975 
} 00:24:05.975 ] 00:24:05.975 }, 00:24:05.975 { 00:24:05.975 "subsystem": "vmd", 00:24:05.975 "config": [] 00:24:05.975 }, 00:24:05.975 { 00:24:05.975 "subsystem": "accel", 00:24:05.976 "config": [ 00:24:05.976 { 00:24:05.976 "method": "accel_set_options", 00:24:05.976 "params": { 00:24:05.976 "small_cache_size": 128, 00:24:05.976 "large_cache_size": 16, 00:24:05.976 "task_count": 2048, 00:24:05.976 "sequence_count": 2048, 00:24:05.976 "buf_count": 2048 00:24:05.976 } 00:24:05.976 } 00:24:05.976 ] 00:24:05.976 }, 00:24:05.976 { 00:24:05.976 "subsystem": "bdev", 00:24:05.976 "config": [ 00:24:05.976 { 00:24:05.976 "method": "bdev_set_options", 00:24:05.976 "params": { 00:24:05.976 "bdev_io_pool_size": 65535, 00:24:05.976 "bdev_io_cache_size": 256, 00:24:05.976 "bdev_auto_examine": true, 00:24:05.976 "iobuf_small_cache_size": 128, 00:24:05.976 "iobuf_large_cache_size": 16 00:24:05.976 } 00:24:05.976 }, 00:24:05.976 { 00:24:05.976 "method": "bdev_raid_set_options", 00:24:05.976 "params": { 00:24:05.976 "process_window_size_kb": 1024, 00:24:05.976 "process_max_bandwidth_mb_sec": 0 00:24:05.976 } 00:24:05.976 }, 00:24:05.976 { 00:24:05.976 "method": "bdev_iscsi_set_options", 00:24:05.976 "params": { 00:24:05.976 "timeout_sec": 30 00:24:05.976 } 00:24:05.976 }, 00:24:05.976 { 00:24:05.976 "method": "bdev_nvme_set_options", 00:24:05.976 "params": { 00:24:05.976 "action_on_timeout": "none", 00:24:05.976 "timeout_us": 0, 00:24:05.976 "timeout_admin_us": 0, 00:24:05.976 "keep_alive_timeout_ms": 10000, 00:24:05.976 "arbitration_burst": 0, 00:24:05.976 "low_priority_weight": 0, 00:24:05.976 "medium_priority_weight": 0, 00:24:05.976 "high_priority_weight": 0, 00:24:05.976 "nvme_adminq_poll_period_us": 10000, 00:24:05.976 "nvme_ioq_poll_period_us": 0, 00:24:05.976 "io_queue_requests": 512, 00:24:05.976 "delay_cmd_submit": true, 00:24:05.976 "transport_retry_count": 4, 00:24:05.976 "bdev_retry_count": 3, 00:24:05.976 "transport_ack_timeout": 0, 00:24:05.976 "ctrlr_loss_timeout_sec": 0, 00:24:05.976 "reconnect_delay_sec": 0, 00:24:05.976 "fast_io_fail_timeout_sec": 0, 00:24:05.976 "disable_auto_failback": false, 00:24:05.976 "generate_uuids": false, 00:24:05.976 "transport_tos": 0, 00:24:05.976 "nvme_error_stat": false, 00:24:05.976 "rdma_srq_size": 0, 00:24:05.976 "io_path_stat": false, 00:24:05.976 "allow_accel_sequence": false, 00:24:05.976 "rdma_max_cq_size": 0, 00:24:05.976 "rdma_cm_event_timeout_ms": 0, 00:24:05.976 "dhchap_digests": [ 00:24:05.976 "sha256", 00:24:05.976 "sha384", 00:24:05.976 "sha512" 00:24:05.976 ], 00:24:05.976 "dhchap_dhgroups": [ 00:24:05.976 "null", 00:24:05.976 "ffdhe2048", 00:24:05.976 "ffdhe3072", 00:24:05.976 "ffdhe4096", 00:24:05.976 "ffdhe6144", 00:24:05.976 "ffdhe8192" 00:24:05.976 ] 00:24:05.976 } 00:24:05.976 }, 00:24:05.976 { 00:24:05.976 "method": "bdev_nvme_attach_controller", 00:24:05.976 "params": { 00:24:05.976 "name": "TLSTEST", 00:24:05.976 "trtype": "TCP", 00:24:05.976 "adrfam": "IPv4", 00:24:05.976 "traddr": "10.0.0.2", 00:24:05.976 "trsvcid": "4420", 00:24:05.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.976 "prchk_reftag": false, 00:24:05.976 "prchk_guard": false, 00:24:05.976 "ctrlr_loss_timeout_sec": 0, 00:24:05.976 "reconnect_delay_sec": 0, 00:24:05.976 "fast_io_fail_timeout_sec": 0, 00:24:05.976 "psk": "key0", 00:24:05.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:05.976 "hdgst": false, 00:24:05.976 "ddgst": false, 00:24:05.976 "multipath": "multipath" 00:24:05.976 } 00:24:05.976 }, 00:24:05.976 { 00:24:05.976 "method": 
"bdev_nvme_set_hotplug", 00:24:05.976 "params": { 00:24:05.976 "period_us": 100000, 00:24:05.976 "enable": false 00:24:05.976 } 00:24:05.976 }, 00:24:05.976 { 00:24:05.976 "method": "bdev_wait_for_examine" 00:24:05.976 } 00:24:05.976 ] 00:24:05.976 }, 00:24:05.976 { 00:24:05.976 "subsystem": "nbd", 00:24:05.976 "config": [] 00:24:05.976 } 00:24:05.976 ] 00:24:05.976 }' 00:24:05.976 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3504394 00:24:05.976 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3504394 ']' 00:24:05.976 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3504394 00:24:05.976 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:05.976 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:05.976 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3504394 00:24:05.976 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:05.976 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:05.976 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3504394' 00:24:05.976 killing process with pid 3504394 00:24:05.976 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3504394 00:24:05.976 Received shutdown signal, test time was about 10.000000 seconds 00:24:05.976 00:24:05.976 Latency(us) 00:24:05.976 [2024-11-09T22:57:32.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.976 [2024-11-09T22:57:32.177Z] =================================================================================================================== 00:24:05.976 [2024-11-09T22:57:32.177Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:05.976 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3504394 00:24:06.909 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3503980 00:24:06.909 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3503980 ']' 00:24:06.910 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3503980 00:24:06.910 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:06.910 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:06.910 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3503980 00:24:06.910 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:06.910 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:06.910 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3503980' 00:24:06.910 killing process with pid 3503980 00:24:06.910 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3503980 00:24:06.910 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3503980 00:24:07.844 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:07.844 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:07.844 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:07.844 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.844 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:07.844 "subsystems": [ 00:24:07.844 { 00:24:07.844 "subsystem": "keyring", 00:24:07.844 "config": [ 00:24:07.844 { 00:24:07.844 "method": "keyring_file_add_key", 00:24:07.844 "params": { 00:24:07.844 "name": "key0", 00:24:07.844 "path": "/tmp/tmp.WSVA3YIoA1" 00:24:07.844 } 00:24:07.844 } 00:24:07.844 ] 00:24:07.844 }, 00:24:07.844 { 00:24:07.844 "subsystem": "iobuf", 00:24:07.844 "config": [ 00:24:07.844 { 00:24:07.844 "method": "iobuf_set_options", 00:24:07.844 "params": { 00:24:07.844 "small_pool_count": 8192, 00:24:07.844 "large_pool_count": 1024, 00:24:07.844 "small_bufsize": 8192, 00:24:07.844 "large_bufsize": 135168, 00:24:07.844 "enable_numa": false 00:24:07.844 } 00:24:07.844 } 00:24:07.844 ] 00:24:07.844 }, 00:24:07.844 { 00:24:07.844 "subsystem": "sock", 00:24:07.844 "config": [ 00:24:07.844 { 00:24:07.844 "method": "sock_set_default_impl", 00:24:07.844 "params": { 00:24:07.844 "impl_name": "posix" 00:24:07.844 } 00:24:07.844 }, 00:24:07.844 { 00:24:07.844 "method": "sock_impl_set_options", 00:24:07.844 "params": { 00:24:07.844 "impl_name": "ssl", 00:24:07.844 "recv_buf_size": 4096, 00:24:07.844 "send_buf_size": 4096, 00:24:07.844 "enable_recv_pipe": true, 00:24:07.844 "enable_quickack": false, 00:24:07.844 "enable_placement_id": 0, 00:24:07.844 "enable_zerocopy_send_server": true, 00:24:07.844 "enable_zerocopy_send_client": false, 00:24:07.844 "zerocopy_threshold": 0, 00:24:07.844 "tls_version": 0, 00:24:07.844 "enable_ktls": false 00:24:07.844 } 00:24:07.844 }, 00:24:07.844 { 00:24:07.844 "method": "sock_impl_set_options", 00:24:07.844 "params": { 00:24:07.844 "impl_name": "posix", 00:24:07.844 "recv_buf_size": 2097152, 00:24:07.844 "send_buf_size": 2097152, 00:24:07.844 "enable_recv_pipe": true, 00:24:07.844 "enable_quickack": false, 00:24:07.844 "enable_placement_id": 0, 00:24:07.844 "enable_zerocopy_send_server": true, 00:24:07.844 "enable_zerocopy_send_client": false, 00:24:07.844 "zerocopy_threshold": 0, 00:24:07.844 "tls_version": 0, 00:24:07.844 "enable_ktls": false 00:24:07.844 } 00:24:07.844 } 00:24:07.844 ] 00:24:07.844 }, 00:24:07.844 { 00:24:07.844 "subsystem": "vmd", 00:24:07.844 "config": [] 00:24:07.844 }, 00:24:07.844 { 00:24:07.844 "subsystem": "accel", 00:24:07.844 "config": [ 00:24:07.844 { 00:24:07.844 "method": "accel_set_options", 00:24:07.844 "params": { 00:24:07.844 "small_cache_size": 128, 00:24:07.844 "large_cache_size": 16, 00:24:07.844 "task_count": 2048, 00:24:07.844 "sequence_count": 2048, 00:24:07.844 "buf_count": 2048 00:24:07.844 } 00:24:07.844 } 00:24:07.844 ] 00:24:07.844 }, 00:24:07.844 { 00:24:07.844 "subsystem": "bdev", 00:24:07.844 "config": [ 00:24:07.844 { 00:24:07.844 "method": "bdev_set_options", 00:24:07.844 "params": { 00:24:07.844 "bdev_io_pool_size": 65535, 00:24:07.844 "bdev_io_cache_size": 256, 00:24:07.844 "bdev_auto_examine": true, 00:24:07.844 "iobuf_small_cache_size": 128, 00:24:07.844 "iobuf_large_cache_size": 16 00:24:07.844 } 00:24:07.844 }, 00:24:07.844 { 00:24:07.845 "method": "bdev_raid_set_options", 00:24:07.845 "params": { 00:24:07.845 
"process_window_size_kb": 1024, 00:24:07.845 "process_max_bandwidth_mb_sec": 0 00:24:07.845 } 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "method": "bdev_iscsi_set_options", 00:24:07.845 "params": { 00:24:07.845 "timeout_sec": 30 00:24:07.845 } 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "method": "bdev_nvme_set_options", 00:24:07.845 "params": { 00:24:07.845 "action_on_timeout": "none", 00:24:07.845 "timeout_us": 0, 00:24:07.845 "timeout_admin_us": 0, 00:24:07.845 "keep_alive_timeout_ms": 10000, 00:24:07.845 "arbitration_burst": 0, 00:24:07.845 "low_priority_weight": 0, 00:24:07.845 "medium_priority_weight": 0, 00:24:07.845 "high_priority_weight": 0, 00:24:07.845 "nvme_adminq_poll_period_us": 10000, 00:24:07.845 "nvme_ioq_poll_period_us": 0, 00:24:07.845 "io_queue_requests": 0, 00:24:07.845 "delay_cmd_submit": true, 00:24:07.845 "transport_retry_count": 4, 00:24:07.845 "bdev_retry_count": 3, 00:24:07.845 "transport_ack_timeout": 0, 00:24:07.845 "ctrlr_loss_timeout_sec": 0, 00:24:07.845 "reconnect_delay_sec": 0, 00:24:07.845 "fast_io_fail_timeout_sec": 0, 00:24:07.845 "disable_auto_failback": false, 00:24:07.845 "generate_uuids": false, 00:24:07.845 "transport_tos": 0, 00:24:07.845 "nvme_error_stat": false, 00:24:07.845 "rdma_srq_size": 0, 00:24:07.845 "io_path_stat": false, 00:24:07.845 "allow_accel_sequence": false, 00:24:07.845 "rdma_max_cq_size": 0, 00:24:07.845 "rdma_cm_event_timeout_ms": 0, 00:24:07.845 "dhchap_digests": [ 00:24:07.845 "sha256", 00:24:07.845 "sha384", 00:24:07.845 "sha512" 00:24:07.845 ], 00:24:07.845 "dhchap_dhgroups": [ 00:24:07.845 "null", 00:24:07.845 "ffdhe2048", 00:24:07.845 "ffdhe3072", 00:24:07.845 "ffdhe4096", 00:24:07.845 "ffdhe6144", 00:24:07.845 "ffdhe8192" 00:24:07.845 ] 00:24:07.845 } 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "method": "bdev_nvme_set_hotplug", 00:24:07.845 "params": { 00:24:07.845 "period_us": 100000, 00:24:07.845 "enable": false 00:24:07.845 } 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "method": "bdev_malloc_create", 00:24:07.845 "params": { 00:24:07.845 "name": "malloc0", 00:24:07.845 "num_blocks": 8192, 00:24:07.845 "block_size": 4096, 00:24:07.845 "physical_block_size": 4096, 00:24:07.845 "uuid": "72c7ffdd-60f2-4e1f-83a0-c56ebbb10cec", 00:24:07.845 "optimal_io_boundary": 0, 00:24:07.845 "md_size": 0, 00:24:07.845 "dif_type": 0, 00:24:07.845 "dif_is_head_of_md": false, 00:24:07.845 "dif_pi_format": 0 00:24:07.845 } 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "method": "bdev_wait_for_examine" 00:24:07.845 } 00:24:07.845 ] 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "subsystem": "nbd", 00:24:07.845 "config": [] 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "subsystem": "scheduler", 00:24:07.845 "config": [ 00:24:07.845 { 00:24:07.845 "method": "framework_set_scheduler", 00:24:07.845 "params": { 00:24:07.845 "name": "static" 00:24:07.845 } 00:24:07.845 } 00:24:07.845 ] 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "subsystem": "nvmf", 00:24:07.845 "config": [ 00:24:07.845 { 00:24:07.845 "method": "nvmf_set_config", 00:24:07.845 "params": { 00:24:07.845 "discovery_filter": "match_any", 00:24:07.845 "admin_cmd_passthru": { 00:24:07.845 "identify_ctrlr": false 00:24:07.845 }, 00:24:07.845 "dhchap_digests": [ 00:24:07.845 "sha256", 00:24:07.845 "sha384", 00:24:07.845 "sha512" 00:24:07.845 ], 00:24:07.845 "dhchap_dhgroups": [ 00:24:07.845 "null", 00:24:07.845 "ffdhe2048", 00:24:07.845 "ffdhe3072", 00:24:07.845 "ffdhe4096", 00:24:07.845 "ffdhe6144", 00:24:07.845 "ffdhe8192" 00:24:07.845 ] 00:24:07.845 } 00:24:07.845 }, 00:24:07.845 { 
00:24:07.845 "method": "nvmf_set_max_subsystems", 00:24:07.845 "params": { 00:24:07.845 "max_subsystems": 1024 00:24:07.845 } 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "method": "nvmf_set_crdt", 00:24:07.845 "params": { 00:24:07.845 "crdt1": 0, 00:24:07.845 "crdt2": 0, 00:24:07.845 "crdt3": 0 00:24:07.845 } 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "method": "nvmf_create_transport", 00:24:07.845 "params": { 00:24:07.845 "trtype": "TCP", 00:24:07.845 "max_queue_depth": 128, 00:24:07.845 "max_io_qpairs_per_ctrlr": 127, 00:24:07.845 "in_capsule_data_size": 4096, 00:24:07.845 "max_io_size": 131072, 00:24:07.845 "io_unit_size": 131072, 00:24:07.845 "max_aq_depth": 128, 00:24:07.845 "num_shared_buffers": 511, 00:24:07.845 "buf_cache_size": 4294967295, 00:24:07.845 "dif_insert_or_strip": false, 00:24:07.845 "zcopy": false, 00:24:07.845 "c2h_success": false, 00:24:07.845 "sock_priority": 0, 00:24:07.845 "abort_timeout_sec": 1, 00:24:07.845 "ack_timeout": 0, 00:24:07.845 "data_wr_pool_size": 0 00:24:07.845 } 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "method": "nvmf_create_subsystem", 00:24:07.845 "params": { 00:24:07.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.845 "allow_any_host": false, 00:24:07.845 "serial_number": "SPDK00000000000001", 00:24:07.845 "model_number": "SPDK bdev Controller", 00:24:07.845 "max_namespaces": 10, 00:24:07.845 "min_cntlid": 1, 00:24:07.845 "max_cntlid": 65519, 00:24:07.845 "ana_reporting": false 00:24:07.845 } 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "method": "nvmf_subsystem_add_host", 00:24:07.845 "params": { 00:24:07.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.845 "host": "nqn.2016-06.io.spdk:host1", 00:24:07.845 "psk": "key0" 00:24:07.845 } 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "method": "nvmf_subsystem_add_ns", 00:24:07.845 "params": { 00:24:07.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.845 "namespace": { 00:24:07.845 "nsid": 1, 00:24:07.845 "bdev_name": "malloc0", 00:24:07.845 "nguid": "72C7FFDD60F24E1F83A0C56EBBB10CEC", 00:24:07.845 "uuid": "72c7ffdd-60f2-4e1f-83a0-c56ebbb10cec", 00:24:07.845 "no_auto_visible": false 00:24:07.845 } 00:24:07.845 } 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "method": "nvmf_subsystem_add_listener", 00:24:07.845 "params": { 00:24:07.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.845 "listen_address": { 00:24:07.845 "trtype": "TCP", 00:24:07.845 "adrfam": "IPv4", 00:24:07.845 "traddr": "10.0.0.2", 00:24:07.845 "trsvcid": "4420" 00:24:07.845 }, 00:24:07.845 "secure_channel": true 00:24:07.845 } 00:24:07.845 } 00:24:07.845 ] 00:24:07.845 } 00:24:07.845 ] 00:24:07.845 }' 00:24:07.845 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3504939 00:24:07.845 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:07.845 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3504939 00:24:07.845 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3504939 ']' 00:24:07.845 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.845 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:07.845 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:24:07.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.845 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:07.845 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.104 [2024-11-09 23:57:34.092258] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:24:08.104 [2024-11-09 23:57:34.092399] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.104 [2024-11-09 23:57:34.236446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.362 [2024-11-09 23:57:34.360814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.362 [2024-11-09 23:57:34.360894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.362 [2024-11-09 23:57:34.360932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.362 [2024-11-09 23:57:34.360972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.362 [2024-11-09 23:57:34.361002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.362 [2024-11-09 23:57:34.362879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.928 [2024-11-09 23:57:34.910938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.928 [2024-11-09 23:57:34.942984] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:08.928 [2024-11-09 23:57:34.943314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.928 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:08.928 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:08.928 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.928 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:08.928 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.928 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.928 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3505030 00:24:08.928 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3505030 /var/tmp/bdevperf.sock 00:24:08.928 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3505030 ']' 00:24:08.928 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.928 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:08.928 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:08.928 
23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:08.928 "subsystems": [ 00:24:08.928 { 00:24:08.928 "subsystem": "keyring", 00:24:08.928 "config": [ 00:24:08.928 { 00:24:08.928 "method": "keyring_file_add_key", 00:24:08.928 "params": { 00:24:08.928 "name": "key0", 00:24:08.928 "path": "/tmp/tmp.WSVA3YIoA1" 00:24:08.928 } 00:24:08.928 } 00:24:08.928 ] 00:24:08.928 }, 00:24:08.928 { 00:24:08.928 "subsystem": "iobuf", 00:24:08.928 "config": [ 00:24:08.928 { 00:24:08.928 "method": "iobuf_set_options", 00:24:08.928 "params": { 00:24:08.928 "small_pool_count": 8192, 00:24:08.928 "large_pool_count": 1024, 00:24:08.928 "small_bufsize": 8192, 00:24:08.928 "large_bufsize": 135168, 00:24:08.928 "enable_numa": false 00:24:08.928 } 00:24:08.928 } 00:24:08.928 ] 00:24:08.928 }, 00:24:08.928 { 00:24:08.928 "subsystem": "sock", 00:24:08.928 "config": [ 00:24:08.928 { 00:24:08.928 "method": "sock_set_default_impl", 00:24:08.928 "params": { 00:24:08.928 "impl_name": "posix" 00:24:08.928 } 00:24:08.928 }, 00:24:08.928 { 00:24:08.928 "method": "sock_impl_set_options", 00:24:08.928 "params": { 00:24:08.928 "impl_name": "ssl", 00:24:08.928 "recv_buf_size": 4096, 00:24:08.928 "send_buf_size": 4096, 00:24:08.928 "enable_recv_pipe": true, 00:24:08.928 "enable_quickack": false, 00:24:08.928 "enable_placement_id": 0, 00:24:08.928 "enable_zerocopy_send_server": true, 00:24:08.928 "enable_zerocopy_send_client": false, 00:24:08.928 "zerocopy_threshold": 0, 00:24:08.928 "tls_version": 0, 00:24:08.928 "enable_ktls": false 00:24:08.928 } 00:24:08.928 }, 00:24:08.928 { 00:24:08.928 "method": "sock_impl_set_options", 00:24:08.928 "params": { 00:24:08.928 "impl_name": "posix", 00:24:08.928 "recv_buf_size": 2097152, 00:24:08.928 "send_buf_size": 2097152, 00:24:08.928 "enable_recv_pipe": true, 00:24:08.928 "enable_quickack": false, 00:24:08.928 "enable_placement_id": 0, 00:24:08.928 "enable_zerocopy_send_server": true, 00:24:08.928 "enable_zerocopy_send_client": false, 00:24:08.928 "zerocopy_threshold": 0, 00:24:08.928 "tls_version": 0, 00:24:08.928 "enable_ktls": false 00:24:08.928 } 00:24:08.928 } 00:24:08.928 ] 00:24:08.928 }, 00:24:08.928 { 00:24:08.928 "subsystem": "vmd", 00:24:08.928 "config": [] 00:24:08.928 }, 00:24:08.928 { 00:24:08.928 "subsystem": "accel", 00:24:08.928 "config": [ 00:24:08.928 { 00:24:08.928 "method": "accel_set_options", 00:24:08.928 "params": { 00:24:08.928 "small_cache_size": 128, 00:24:08.928 "large_cache_size": 16, 00:24:08.928 "task_count": 2048, 00:24:08.928 "sequence_count": 2048, 00:24:08.928 "buf_count": 2048 00:24:08.928 } 00:24:08.928 } 00:24:08.928 ] 00:24:08.928 }, 00:24:08.928 { 00:24:08.928 "subsystem": "bdev", 00:24:08.928 "config": [ 00:24:08.928 { 00:24:08.928 "method": "bdev_set_options", 00:24:08.928 "params": { 00:24:08.928 "bdev_io_pool_size": 65535, 00:24:08.928 "bdev_io_cache_size": 256, 00:24:08.928 "bdev_auto_examine": true, 00:24:08.928 "iobuf_small_cache_size": 128, 00:24:08.929 "iobuf_large_cache_size": 16 00:24:08.929 } 00:24:08.929 }, 00:24:08.929 { 00:24:08.929 "method": "bdev_raid_set_options", 00:24:08.929 "params": { 00:24:08.929 "process_window_size_kb": 1024, 00:24:08.929 "process_max_bandwidth_mb_sec": 0 00:24:08.929 } 00:24:08.929 }, 00:24:08.929 { 00:24:08.929 "method": "bdev_iscsi_set_options", 00:24:08.929 "params": { 00:24:08.929 "timeout_sec": 30 00:24:08.929 } 00:24:08.929 }, 00:24:08.929 { 00:24:08.929 "method": "bdev_nvme_set_options", 00:24:08.929 "params": { 00:24:08.929 "action_on_timeout": "none", 00:24:08.929 
"timeout_us": 0, 00:24:08.929 "timeout_admin_us": 0, 00:24:08.929 "keep_alive_timeout_ms": 10000, 00:24:08.929 "arbitration_burst": 0, 00:24:08.929 "low_priority_weight": 0, 00:24:08.929 "medium_priority_weight": 0, 00:24:08.929 "high_priority_weight": 0, 00:24:08.929 "nvme_adminq_poll_period_us": 10000, 00:24:08.929 "nvme_ioq_poll_period_us": 0, 00:24:08.929 "io_queue_requests": 512, 00:24:08.929 "delay_cmd_submit": true, 00:24:08.929 "transport_retry_count": 4, 00:24:08.929 "bdev_retry_count": 3, 00:24:08.929 "transport_ack_timeout": 0, 00:24:08.929 "ctrlr_loss_timeout_sec": 0, 00:24:08.929 "reconnect_delay_sec": 0, 00:24:08.929 "fast_io_fail_timeout_sec": 0, 00:24:08.929 "disable_auto_failback": false, 00:24:08.929 "generate_uuids": false, 00:24:08.929 "transport_tos": 0, 00:24:08.929 "nvme_error_stat": false, 00:24:08.929 "rdma_srq_size": 0, 00:24:08.929 "io_path_stat": false, 00:24:08.929 "allow_accel_sequence": false, 00:24:08.929 "rdma_max_cq_size": 0, 00:24:08.929 "rdma_cm_event_timeout_ms": 0, 00:24:08.929 "dhchap_digests": [ 00:24:08.929 "sha256", 00:24:08.929 "sha384", 00:24:08.929 "sha512" 00:24:08.929 ], 00:24:08.929 "dhchap_dhgroups": [ 00:24:08.929 "null", 00:24:08.929 "ffdhe2048", 00:24:08.929 "ffdhe3072", 00:24:08.929 "ffdhe4096", 00:24:08.929 "ffdhe6144", 00:24:08.929 "ffdhe8192" 00:24:08.929 ] 00:24:08.929 } 00:24:08.929 }, 00:24:08.929 { 00:24:08.929 "method": "bdev_nvme_attach_controller", 00:24:08.929 "params": { 00:24:08.929 "name": "TLSTEST", 00:24:08.929 "trtype": "TCP", 00:24:08.929 "adrfam": "IPv4", 00:24:08.929 "traddr": "10.0.0.2", 00:24:08.929 "trsvcid": "4420", 00:24:08.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.929 "prchk_reftag": false, 00:24:08.929 "prchk_guard": false, 00:24:08.929 "ctrlr_loss_timeout_sec": 0, 00:24:08.929 "reconnect_delay_sec": 0, 00:24:08.929 "fast_io_fail_timeout_sec": 0, 00:24:08.929 "psk": "key0", 00:24:08.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:08.929 "hdgst": false, 00:24:08.929 "ddgst": false, 00:24:08.929 "multipath": "multipath" 00:24:08.929 } 00:24:08.929 }, 00:24:08.929 { 00:24:08.929 "method": "bdev_nvme_set_hotplug", 00:24:08.929 "params": { 00:24:08.929 "period_us": 100000, 00:24:08.929 "enable": false 00:24:08.929 } 00:24:08.929 }, 00:24:08.929 { 00:24:08.929 "method": "bdev_wait_for_examine" 00:24:08.929 } 00:24:08.929 ] 00:24:08.929 }, 00:24:08.929 { 00:24:08.929 "subsystem": "nbd", 00:24:08.929 "config": [] 00:24:08.929 } 00:24:08.929 ] 00:24:08.929 }' 00:24:08.929 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:08.929 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:08.929 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.187 [2024-11-09 23:57:35.188740] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:24:09.187 [2024-11-09 23:57:35.188887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3505030 ] 00:24:09.187 [2024-11-09 23:57:35.336623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.445 [2024-11-09 23:57:35.456618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.703 [2024-11-09 23:57:35.848150] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:09.960 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:09.960 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:09.960 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:10.218 Running I/O for 10 seconds... 00:24:12.085 2608.00 IOPS, 10.19 MiB/s [2024-11-09T22:57:39.660Z] 2611.50 IOPS, 10.20 MiB/s [2024-11-09T22:57:40.591Z] 2627.33 IOPS, 10.26 MiB/s [2024-11-09T22:57:41.525Z] 2637.50 IOPS, 10.30 MiB/s [2024-11-09T22:57:42.459Z] 2648.00 IOPS, 10.34 MiB/s [2024-11-09T22:57:43.393Z] 2656.83 IOPS, 10.38 MiB/s [2024-11-09T22:57:44.326Z] 2663.43 IOPS, 10.40 MiB/s [2024-11-09T22:57:45.701Z] 2654.25 IOPS, 10.37 MiB/s [2024-11-09T22:57:46.635Z] 2662.78 IOPS, 10.40 MiB/s [2024-11-09T22:57:46.635Z] 2664.90 IOPS, 10.41 MiB/s 00:24:20.434 Latency(us) 00:24:20.434 [2024-11-09T22:57:46.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.434 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.434 Verification LBA range: start 0x0 length 0x2000 00:24:20.434 TLSTESTn1 : 10.03 2670.81 10.43 0.00 0.00 47840.86 8349.77 73788.68 00:24:20.434 [2024-11-09T22:57:46.635Z] =================================================================================================================== 00:24:20.434 [2024-11-09T22:57:46.635Z] Total : 2670.81 10.43 0.00 0.00 47840.86 8349.77 73788.68 00:24:20.434 { 00:24:20.434 "results": [ 00:24:20.434 { 00:24:20.434 "job": "TLSTESTn1", 00:24:20.434 "core_mask": "0x4", 00:24:20.434 "workload": "verify", 00:24:20.434 "status": "finished", 00:24:20.434 "verify_range": { 00:24:20.434 "start": 0, 00:24:20.434 "length": 8192 00:24:20.434 }, 00:24:20.434 "queue_depth": 128, 00:24:20.434 "io_size": 4096, 00:24:20.434 "runtime": 10.025433, 00:24:20.434 "iops": 2670.807335703106, 00:24:20.434 "mibps": 10.432841155090259, 00:24:20.434 "io_failed": 0, 00:24:20.434 "io_timeout": 0, 00:24:20.434 "avg_latency_us": 47840.855755402845, 00:24:20.434 "min_latency_us": 8349.771851851852, 00:24:20.434 "max_latency_us": 73788.68148148149 00:24:20.434 } 00:24:20.434 ], 00:24:20.434 "core_count": 1 00:24:20.434 } 00:24:20.434 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:20.434 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3505030 00:24:20.434 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3505030 ']' 00:24:20.434 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3505030 00:24:20.434 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:24:20.434 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:20.434 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3505030 00:24:20.434 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:20.434 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:20.434 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3505030' 00:24:20.434 killing process with pid 3505030 00:24:20.434 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3505030 00:24:20.434 Received shutdown signal, test time was about 10.000000 seconds 00:24:20.434 00:24:20.434 Latency(us) 00:24:20.434 [2024-11-09T22:57:46.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.434 [2024-11-09T22:57:46.635Z] =================================================================================================================== 00:24:20.434 [2024-11-09T22:57:46.635Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.435 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3505030 00:24:21.001 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3504939 00:24:21.001 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3504939 ']' 00:24:21.001 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3504939 00:24:21.001 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:21.001 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:21.001 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3504939 00:24:21.259 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:21.259 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:21.259 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3504939' 00:24:21.259 killing process with pid 3504939 00:24:21.259 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3504939 00:24:21.259 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3504939 00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3506556 00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3506556 
00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3506556 ']' 00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:22.633 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.633 [2024-11-09 23:57:48.587601] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:24:22.633 [2024-11-09 23:57:48.587758] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.633 [2024-11-09 23:57:48.741561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.891 [2024-11-09 23:57:48.878071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.891 [2024-11-09 23:57:48.878152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.891 [2024-11-09 23:57:48.878178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.891 [2024-11-09 23:57:48.878207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.891 [2024-11-09 23:57:48.878227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
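At this point nvmfappstart brings up a fresh target: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace with tracepoint group mask 0xFFFF, and waitforlisten blocks until the RPC socket answers before the test proceeds. A condensed sketch of those steps is below; the until-loop is a simplification of waitforlisten (which also retries an RPC against the socket), and the spdk_trace binary path is an assumption, placed beside the other build artifacts seen in this log.

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket
    # per the notices above, a trace snapshot can be taken while it runs:
    #   $rootdir/build/bin/spdk_trace -s nvmf -i 0   (or copy /dev/shm/nvmf_trace.0)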
00:24:22.891 [2024-11-09 23:57:48.879877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.456 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:23.456 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:23.456 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.456 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:23.456 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.456 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.456 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.WSVA3YIoA1 00:24:23.456 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WSVA3YIoA1 00:24:23.456 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:23.714 [2024-11-09 23:57:49.792109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.714 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:23.971 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:24.229 [2024-11-09 23:57:50.365814] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:24.229 [2024-11-09 23:57:50.366183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.229 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:24.487 malloc0 00:24:24.744 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:25.002 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1 00:24:25.260 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:25.518 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3506975 00:24:25.518 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:25.518 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:25.518 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3506975 /var/tmp/bdevperf.sock 00:24:25.518 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3506975 ']' 00:24:25.518 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:25.518 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:25.518 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:25.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:25.518 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:25.518 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.518 [2024-11-09 23:57:51.584469] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:24:25.518 [2024-11-09 23:57:51.584617] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3506975 ] 00:24:25.776 [2024-11-09 23:57:51.725615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.776 [2024-11-09 23:57:51.861793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.343 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:26.343 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:26.343 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1 00:24:26.947 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:26.947 [2024-11-09 23:57:53.054305] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.231 nvme0n1 00:24:27.231 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:27.231 Running I/O for 1 seconds... 
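Stripped of the xtrace noise, the setup that just ran is: create the TCP transport, create the subsystem, add a TLS-enabled listener (the -k flag is what triggers the "TLS support is considered experimental" notice above), back it with a malloc namespace, register the PSK in the target keyring, allow the host against that key, and then, on the bdevperf side, register the same PSK and attach with --psk. Collected into one sequence for reference, with all arguments copied from the log and $rpc as an illustrative shorthand:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side (default RPC socket)
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # initiator side (bdevperf RPC socket)
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1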
00:24:28.165 2485.00 IOPS, 9.71 MiB/s 00:24:28.165 Latency(us) 00:24:28.165 [2024-11-09T22:57:54.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.165 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:28.165 Verification LBA range: start 0x0 length 0x2000 00:24:28.165 nvme0n1 : 1.03 2546.63 9.95 0.00 0.00 49684.03 8932.31 41748.86 00:24:28.165 [2024-11-09T22:57:54.366Z] =================================================================================================================== 00:24:28.165 [2024-11-09T22:57:54.366Z] Total : 2546.63 9.95 0.00 0.00 49684.03 8932.31 41748.86 00:24:28.165 { 00:24:28.165 "results": [ 00:24:28.165 { 00:24:28.165 "job": "nvme0n1", 00:24:28.165 "core_mask": "0x2", 00:24:28.165 "workload": "verify", 00:24:28.165 "status": "finished", 00:24:28.165 "verify_range": { 00:24:28.165 "start": 0, 00:24:28.165 "length": 8192 00:24:28.165 }, 00:24:28.165 "queue_depth": 128, 00:24:28.165 "io_size": 4096, 00:24:28.165 "runtime": 1.026061, 00:24:28.165 "iops": 2546.6322177726274, 00:24:28.165 "mibps": 9.947782100674326, 00:24:28.165 "io_failed": 0, 00:24:28.165 "io_timeout": 0, 00:24:28.165 "avg_latency_us": 49684.027733696195, 00:24:28.165 "min_latency_us": 8932.314074074075, 00:24:28.165 "max_latency_us": 41748.85925925926 00:24:28.165 } 00:24:28.165 ], 00:24:28.165 "core_count": 1 00:24:28.165 } 00:24:28.165 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3506975 00:24:28.165 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3506975 ']' 00:24:28.165 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3506975 00:24:28.165 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:28.165 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:28.165 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3506975 00:24:28.165 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:28.165 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:28.165 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3506975' 00:24:28.165 killing process with pid 3506975 00:24:28.165 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3506975 00:24:28.165 Received shutdown signal, test time was about 1.000000 seconds 00:24:28.165 00:24:28.165 Latency(us) 00:24:28.165 [2024-11-09T22:57:54.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.165 [2024-11-09T22:57:54.366Z] =================================================================================================================== 00:24:28.165 [2024-11-09T22:57:54.366Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.165 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3506975 00:24:29.100 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3506556 00:24:29.100 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3506556 ']' 00:24:29.100 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3506556 00:24:29.100 23:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:29.100 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:29.100 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3506556 00:24:29.100 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:29.100 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:29.100 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3506556' 00:24:29.100 killing process with pid 3506556 00:24:29.100 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3506556 00:24:29.100 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3506556 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3507526 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3507526 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3507526 ']' 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:30.474 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.474 [2024-11-09 23:57:56.610442] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:24:30.474 [2024-11-09 23:57:56.610583] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.732 [2024-11-09 23:57:56.759481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.732 [2024-11-09 23:57:56.902191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.732 [2024-11-09 23:57:56.902282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:30.732 [2024-11-09 23:57:56.902314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.732 [2024-11-09 23:57:56.902340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.732 [2024-11-09 23:57:56.902360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.732 [2024-11-09 23:57:56.904046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.667 [2024-11-09 23:57:57.693845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.667 malloc0 00:24:31.667 [2024-11-09 23:57:57.753918] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:31.667 [2024-11-09 23:57:57.754293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3507681 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3507681 /var/tmp/bdevperf.sock 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3507681 ']' 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:31.667 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.667 [2024-11-09 23:57:57.866014] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
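As in the earlier passes, bdevperf is started with -z, so after reading the config handed to it on /dev/fd/63 it sits idle; the PSK is then registered and the controller attached over its RPC socket, and the perform_tests call is what actually starts the one-second verify run reported next. The trigger is the same helper throughout this log:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests
    # the earlier pass also passes -t 20 to this helper; given that run lasted 10 seconds,
    # that value is presumably the helper's timeout rather than the test duration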
00:24:31.667 [2024-11-09 23:57:57.866136] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507681 ] 00:24:31.925 [2024-11-09 23:57:57.998456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.925 [2024-11-09 23:57:58.126606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.860 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:32.860 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:32.860 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WSVA3YIoA1 00:24:33.117 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:33.376 [2024-11-09 23:57:59.460717] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.376 nvme0n1 00:24:33.376 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:33.632 Running I/O for 1 seconds... 00:24:34.566 2415.00 IOPS, 9.43 MiB/s 00:24:34.566 Latency(us) 00:24:34.566 [2024-11-09T22:58:00.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.566 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:34.566 Verification LBA range: start 0x0 length 0x2000 00:24:34.566 nvme0n1 : 1.03 2476.22 9.67 0.00 0.00 51082.36 8349.77 37865.24 00:24:34.566 [2024-11-09T22:58:00.767Z] =================================================================================================================== 00:24:34.566 [2024-11-09T22:58:00.767Z] Total : 2476.22 9.67 0.00 0.00 51082.36 8349.77 37865.24 00:24:34.566 { 00:24:34.566 "results": [ 00:24:34.566 { 00:24:34.566 "job": "nvme0n1", 00:24:34.566 "core_mask": "0x2", 00:24:34.566 "workload": "verify", 00:24:34.566 "status": "finished", 00:24:34.566 "verify_range": { 00:24:34.566 "start": 0, 00:24:34.566 "length": 8192 00:24:34.566 }, 00:24:34.566 "queue_depth": 128, 00:24:34.566 "io_size": 4096, 00:24:34.566 "runtime": 1.026968, 00:24:34.566 "iops": 2476.221264927437, 00:24:34.566 "mibps": 9.6727393161228, 00:24:34.566 "io_failed": 0, 00:24:34.566 "io_timeout": 0, 00:24:34.566 "avg_latency_us": 51082.3631138492, 00:24:34.566 "min_latency_us": 8349.771851851852, 00:24:34.566 "max_latency_us": 37865.24444444444 00:24:34.566 } 00:24:34.566 ], 00:24:34.566 "core_count": 1 00:24:34.566 } 00:24:34.566 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:34.566 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.566 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.824 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.824 23:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:34.824 "subsystems": [ 00:24:34.824 { 00:24:34.824 "subsystem": "keyring", 00:24:34.824 "config": [ 00:24:34.824 { 00:24:34.824 "method": "keyring_file_add_key", 00:24:34.824 "params": { 00:24:34.824 "name": "key0", 00:24:34.824 "path": "/tmp/tmp.WSVA3YIoA1" 00:24:34.824 } 00:24:34.824 } 00:24:34.824 ] 00:24:34.824 }, 00:24:34.824 { 00:24:34.824 "subsystem": "iobuf", 00:24:34.824 "config": [ 00:24:34.824 { 00:24:34.824 "method": "iobuf_set_options", 00:24:34.824 "params": { 00:24:34.824 "small_pool_count": 8192, 00:24:34.824 "large_pool_count": 1024, 00:24:34.824 "small_bufsize": 8192, 00:24:34.824 "large_bufsize": 135168, 00:24:34.824 "enable_numa": false 00:24:34.824 } 00:24:34.824 } 00:24:34.824 ] 00:24:34.824 }, 00:24:34.824 { 00:24:34.824 "subsystem": "sock", 00:24:34.824 "config": [ 00:24:34.824 { 00:24:34.824 "method": "sock_set_default_impl", 00:24:34.824 "params": { 00:24:34.824 "impl_name": "posix" 00:24:34.824 } 00:24:34.824 }, 00:24:34.824 { 00:24:34.824 "method": "sock_impl_set_options", 00:24:34.824 "params": { 00:24:34.824 "impl_name": "ssl", 00:24:34.824 "recv_buf_size": 4096, 00:24:34.824 "send_buf_size": 4096, 00:24:34.824 "enable_recv_pipe": true, 00:24:34.824 "enable_quickack": false, 00:24:34.824 "enable_placement_id": 0, 00:24:34.824 "enable_zerocopy_send_server": true, 00:24:34.824 "enable_zerocopy_send_client": false, 00:24:34.824 "zerocopy_threshold": 0, 00:24:34.824 "tls_version": 0, 00:24:34.824 "enable_ktls": false 00:24:34.824 } 00:24:34.824 }, 00:24:34.824 { 00:24:34.824 "method": "sock_impl_set_options", 00:24:34.824 "params": { 00:24:34.824 "impl_name": "posix", 00:24:34.824 "recv_buf_size": 2097152, 00:24:34.824 "send_buf_size": 2097152, 00:24:34.824 "enable_recv_pipe": true, 00:24:34.824 "enable_quickack": false, 00:24:34.824 "enable_placement_id": 0, 00:24:34.824 "enable_zerocopy_send_server": true, 00:24:34.824 "enable_zerocopy_send_client": false, 00:24:34.824 "zerocopy_threshold": 0, 00:24:34.824 "tls_version": 0, 00:24:34.824 "enable_ktls": false 00:24:34.824 } 00:24:34.824 } 00:24:34.824 ] 00:24:34.824 }, 00:24:34.824 { 00:24:34.824 "subsystem": "vmd", 00:24:34.824 "config": [] 00:24:34.824 }, 00:24:34.824 { 00:24:34.824 "subsystem": "accel", 00:24:34.824 "config": [ 00:24:34.824 { 00:24:34.824 "method": "accel_set_options", 00:24:34.824 "params": { 00:24:34.824 "small_cache_size": 128, 00:24:34.824 "large_cache_size": 16, 00:24:34.824 "task_count": 2048, 00:24:34.824 "sequence_count": 2048, 00:24:34.824 "buf_count": 2048 00:24:34.824 } 00:24:34.824 } 00:24:34.824 ] 00:24:34.824 }, 00:24:34.824 { 00:24:34.824 "subsystem": "bdev", 00:24:34.824 "config": [ 00:24:34.824 { 00:24:34.824 "method": "bdev_set_options", 00:24:34.824 "params": { 00:24:34.824 "bdev_io_pool_size": 65535, 00:24:34.824 "bdev_io_cache_size": 256, 00:24:34.824 "bdev_auto_examine": true, 00:24:34.824 "iobuf_small_cache_size": 128, 00:24:34.824 "iobuf_large_cache_size": 16 00:24:34.824 } 00:24:34.824 }, 00:24:34.824 { 00:24:34.824 "method": "bdev_raid_set_options", 00:24:34.824 "params": { 00:24:34.824 "process_window_size_kb": 1024, 00:24:34.824 "process_max_bandwidth_mb_sec": 0 00:24:34.824 } 00:24:34.824 }, 00:24:34.824 { 00:24:34.824 "method": "bdev_iscsi_set_options", 00:24:34.824 "params": { 00:24:34.824 "timeout_sec": 30 00:24:34.824 } 00:24:34.824 }, 00:24:34.824 { 00:24:34.824 "method": "bdev_nvme_set_options", 00:24:34.824 "params": { 00:24:34.824 "action_on_timeout": "none", 00:24:34.824 
"timeout_us": 0, 00:24:34.824 "timeout_admin_us": 0, 00:24:34.824 "keep_alive_timeout_ms": 10000, 00:24:34.824 "arbitration_burst": 0, 00:24:34.824 "low_priority_weight": 0, 00:24:34.824 "medium_priority_weight": 0, 00:24:34.824 "high_priority_weight": 0, 00:24:34.824 "nvme_adminq_poll_period_us": 10000, 00:24:34.824 "nvme_ioq_poll_period_us": 0, 00:24:34.824 "io_queue_requests": 0, 00:24:34.824 "delay_cmd_submit": true, 00:24:34.824 "transport_retry_count": 4, 00:24:34.824 "bdev_retry_count": 3, 00:24:34.824 "transport_ack_timeout": 0, 00:24:34.825 "ctrlr_loss_timeout_sec": 0, 00:24:34.825 "reconnect_delay_sec": 0, 00:24:34.825 "fast_io_fail_timeout_sec": 0, 00:24:34.825 "disable_auto_failback": false, 00:24:34.825 "generate_uuids": false, 00:24:34.825 "transport_tos": 0, 00:24:34.825 "nvme_error_stat": false, 00:24:34.825 "rdma_srq_size": 0, 00:24:34.825 "io_path_stat": false, 00:24:34.825 "allow_accel_sequence": false, 00:24:34.825 "rdma_max_cq_size": 0, 00:24:34.825 "rdma_cm_event_timeout_ms": 0, 00:24:34.825 "dhchap_digests": [ 00:24:34.825 "sha256", 00:24:34.825 "sha384", 00:24:34.825 "sha512" 00:24:34.825 ], 00:24:34.825 "dhchap_dhgroups": [ 00:24:34.825 "null", 00:24:34.825 "ffdhe2048", 00:24:34.825 "ffdhe3072", 00:24:34.825 "ffdhe4096", 00:24:34.825 "ffdhe6144", 00:24:34.825 "ffdhe8192" 00:24:34.825 ] 00:24:34.825 } 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "method": "bdev_nvme_set_hotplug", 00:24:34.825 "params": { 00:24:34.825 "period_us": 100000, 00:24:34.825 "enable": false 00:24:34.825 } 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "method": "bdev_malloc_create", 00:24:34.825 "params": { 00:24:34.825 "name": "malloc0", 00:24:34.825 "num_blocks": 8192, 00:24:34.825 "block_size": 4096, 00:24:34.825 "physical_block_size": 4096, 00:24:34.825 "uuid": "837b2fda-a26e-4be7-83fa-abc7c63c19ed", 00:24:34.825 "optimal_io_boundary": 0, 00:24:34.825 "md_size": 0, 00:24:34.825 "dif_type": 0, 00:24:34.825 "dif_is_head_of_md": false, 00:24:34.825 "dif_pi_format": 0 00:24:34.825 } 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "method": "bdev_wait_for_examine" 00:24:34.825 } 00:24:34.825 ] 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "subsystem": "nbd", 00:24:34.825 "config": [] 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "subsystem": "scheduler", 00:24:34.825 "config": [ 00:24:34.825 { 00:24:34.825 "method": "framework_set_scheduler", 00:24:34.825 "params": { 00:24:34.825 "name": "static" 00:24:34.825 } 00:24:34.825 } 00:24:34.825 ] 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "subsystem": "nvmf", 00:24:34.825 "config": [ 00:24:34.825 { 00:24:34.825 "method": "nvmf_set_config", 00:24:34.825 "params": { 00:24:34.825 "discovery_filter": "match_any", 00:24:34.825 "admin_cmd_passthru": { 00:24:34.825 "identify_ctrlr": false 00:24:34.825 }, 00:24:34.825 "dhchap_digests": [ 00:24:34.825 "sha256", 00:24:34.825 "sha384", 00:24:34.825 "sha512" 00:24:34.825 ], 00:24:34.825 "dhchap_dhgroups": [ 00:24:34.825 "null", 00:24:34.825 "ffdhe2048", 00:24:34.825 "ffdhe3072", 00:24:34.825 "ffdhe4096", 00:24:34.825 "ffdhe6144", 00:24:34.825 "ffdhe8192" 00:24:34.825 ] 00:24:34.825 } 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "method": "nvmf_set_max_subsystems", 00:24:34.825 "params": { 00:24:34.825 "max_subsystems": 1024 00:24:34.825 } 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "method": "nvmf_set_crdt", 00:24:34.825 "params": { 00:24:34.825 "crdt1": 0, 00:24:34.825 "crdt2": 0, 00:24:34.825 "crdt3": 0 00:24:34.825 } 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "method": "nvmf_create_transport", 00:24:34.825 "params": 
{ 00:24:34.825 "trtype": "TCP", 00:24:34.825 "max_queue_depth": 128, 00:24:34.825 "max_io_qpairs_per_ctrlr": 127, 00:24:34.825 "in_capsule_data_size": 4096, 00:24:34.825 "max_io_size": 131072, 00:24:34.825 "io_unit_size": 131072, 00:24:34.825 "max_aq_depth": 128, 00:24:34.825 "num_shared_buffers": 511, 00:24:34.825 "buf_cache_size": 4294967295, 00:24:34.825 "dif_insert_or_strip": false, 00:24:34.825 "zcopy": false, 00:24:34.825 "c2h_success": false, 00:24:34.825 "sock_priority": 0, 00:24:34.825 "abort_timeout_sec": 1, 00:24:34.825 "ack_timeout": 0, 00:24:34.825 "data_wr_pool_size": 0 00:24:34.825 } 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "method": "nvmf_create_subsystem", 00:24:34.825 "params": { 00:24:34.825 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.825 "allow_any_host": false, 00:24:34.825 "serial_number": "00000000000000000000", 00:24:34.825 "model_number": "SPDK bdev Controller", 00:24:34.825 "max_namespaces": 32, 00:24:34.825 "min_cntlid": 1, 00:24:34.825 "max_cntlid": 65519, 00:24:34.825 "ana_reporting": false 00:24:34.825 } 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "method": "nvmf_subsystem_add_host", 00:24:34.825 "params": { 00:24:34.825 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.825 "host": "nqn.2016-06.io.spdk:host1", 00:24:34.825 "psk": "key0" 00:24:34.825 } 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "method": "nvmf_subsystem_add_ns", 00:24:34.825 "params": { 00:24:34.825 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.825 "namespace": { 00:24:34.825 "nsid": 1, 00:24:34.825 "bdev_name": "malloc0", 00:24:34.825 "nguid": "837B2FDAA26E4BE783FAABC7C63C19ED", 00:24:34.825 "uuid": "837b2fda-a26e-4be7-83fa-abc7c63c19ed", 00:24:34.825 "no_auto_visible": false 00:24:34.825 } 00:24:34.825 } 00:24:34.825 }, 00:24:34.825 { 00:24:34.825 "method": "nvmf_subsystem_add_listener", 00:24:34.825 "params": { 00:24:34.825 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.825 "listen_address": { 00:24:34.825 "trtype": "TCP", 00:24:34.825 "adrfam": "IPv4", 00:24:34.825 "traddr": "10.0.0.2", 00:24:34.825 "trsvcid": "4420" 00:24:34.825 }, 00:24:34.825 "secure_channel": false, 00:24:34.825 "sock_impl": "ssl" 00:24:34.825 } 00:24:34.825 } 00:24:34.825 ] 00:24:34.825 } 00:24:34.825 ] 00:24:34.825 }' 00:24:34.825 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:35.084 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:35.084 "subsystems": [ 00:24:35.084 { 00:24:35.084 "subsystem": "keyring", 00:24:35.084 "config": [ 00:24:35.084 { 00:24:35.084 "method": "keyring_file_add_key", 00:24:35.084 "params": { 00:24:35.084 "name": "key0", 00:24:35.084 "path": "/tmp/tmp.WSVA3YIoA1" 00:24:35.084 } 00:24:35.084 } 00:24:35.084 ] 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "subsystem": "iobuf", 00:24:35.084 "config": [ 00:24:35.084 { 00:24:35.084 "method": "iobuf_set_options", 00:24:35.084 "params": { 00:24:35.084 "small_pool_count": 8192, 00:24:35.084 "large_pool_count": 1024, 00:24:35.084 "small_bufsize": 8192, 00:24:35.084 "large_bufsize": 135168, 00:24:35.084 "enable_numa": false 00:24:35.084 } 00:24:35.084 } 00:24:35.084 ] 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "subsystem": "sock", 00:24:35.084 "config": [ 00:24:35.084 { 00:24:35.084 "method": "sock_set_default_impl", 00:24:35.084 "params": { 00:24:35.084 "impl_name": "posix" 00:24:35.084 } 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "method": "sock_impl_set_options", 00:24:35.084 
"params": { 00:24:35.084 "impl_name": "ssl", 00:24:35.084 "recv_buf_size": 4096, 00:24:35.084 "send_buf_size": 4096, 00:24:35.084 "enable_recv_pipe": true, 00:24:35.084 "enable_quickack": false, 00:24:35.084 "enable_placement_id": 0, 00:24:35.084 "enable_zerocopy_send_server": true, 00:24:35.084 "enable_zerocopy_send_client": false, 00:24:35.084 "zerocopy_threshold": 0, 00:24:35.084 "tls_version": 0, 00:24:35.084 "enable_ktls": false 00:24:35.084 } 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "method": "sock_impl_set_options", 00:24:35.084 "params": { 00:24:35.084 "impl_name": "posix", 00:24:35.084 "recv_buf_size": 2097152, 00:24:35.084 "send_buf_size": 2097152, 00:24:35.084 "enable_recv_pipe": true, 00:24:35.084 "enable_quickack": false, 00:24:35.084 "enable_placement_id": 0, 00:24:35.084 "enable_zerocopy_send_server": true, 00:24:35.084 "enable_zerocopy_send_client": false, 00:24:35.084 "zerocopy_threshold": 0, 00:24:35.084 "tls_version": 0, 00:24:35.084 "enable_ktls": false 00:24:35.084 } 00:24:35.084 } 00:24:35.084 ] 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "subsystem": "vmd", 00:24:35.084 "config": [] 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "subsystem": "accel", 00:24:35.084 "config": [ 00:24:35.084 { 00:24:35.084 "method": "accel_set_options", 00:24:35.084 "params": { 00:24:35.084 "small_cache_size": 128, 00:24:35.084 "large_cache_size": 16, 00:24:35.084 "task_count": 2048, 00:24:35.084 "sequence_count": 2048, 00:24:35.084 "buf_count": 2048 00:24:35.084 } 00:24:35.084 } 00:24:35.084 ] 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "subsystem": "bdev", 00:24:35.084 "config": [ 00:24:35.084 { 00:24:35.084 "method": "bdev_set_options", 00:24:35.084 "params": { 00:24:35.084 "bdev_io_pool_size": 65535, 00:24:35.084 "bdev_io_cache_size": 256, 00:24:35.084 "bdev_auto_examine": true, 00:24:35.084 "iobuf_small_cache_size": 128, 00:24:35.084 "iobuf_large_cache_size": 16 00:24:35.084 } 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "method": "bdev_raid_set_options", 00:24:35.084 "params": { 00:24:35.084 "process_window_size_kb": 1024, 00:24:35.084 "process_max_bandwidth_mb_sec": 0 00:24:35.084 } 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "method": "bdev_iscsi_set_options", 00:24:35.084 "params": { 00:24:35.084 "timeout_sec": 30 00:24:35.084 } 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "method": "bdev_nvme_set_options", 00:24:35.084 "params": { 00:24:35.084 "action_on_timeout": "none", 00:24:35.084 "timeout_us": 0, 00:24:35.084 "timeout_admin_us": 0, 00:24:35.084 "keep_alive_timeout_ms": 10000, 00:24:35.084 "arbitration_burst": 0, 00:24:35.084 "low_priority_weight": 0, 00:24:35.084 "medium_priority_weight": 0, 00:24:35.084 "high_priority_weight": 0, 00:24:35.084 "nvme_adminq_poll_period_us": 10000, 00:24:35.084 "nvme_ioq_poll_period_us": 0, 00:24:35.084 "io_queue_requests": 512, 00:24:35.084 "delay_cmd_submit": true, 00:24:35.084 "transport_retry_count": 4, 00:24:35.084 "bdev_retry_count": 3, 00:24:35.084 "transport_ack_timeout": 0, 00:24:35.084 "ctrlr_loss_timeout_sec": 0, 00:24:35.084 "reconnect_delay_sec": 0, 00:24:35.084 "fast_io_fail_timeout_sec": 0, 00:24:35.084 "disable_auto_failback": false, 00:24:35.084 "generate_uuids": false, 00:24:35.084 "transport_tos": 0, 00:24:35.084 "nvme_error_stat": false, 00:24:35.084 "rdma_srq_size": 0, 00:24:35.084 "io_path_stat": false, 00:24:35.084 "allow_accel_sequence": false, 00:24:35.084 "rdma_max_cq_size": 0, 00:24:35.084 "rdma_cm_event_timeout_ms": 0, 00:24:35.084 "dhchap_digests": [ 00:24:35.084 "sha256", 00:24:35.084 "sha384", 00:24:35.084 
"sha512" 00:24:35.084 ], 00:24:35.084 "dhchap_dhgroups": [ 00:24:35.084 "null", 00:24:35.084 "ffdhe2048", 00:24:35.084 "ffdhe3072", 00:24:35.084 "ffdhe4096", 00:24:35.084 "ffdhe6144", 00:24:35.084 "ffdhe8192" 00:24:35.084 ] 00:24:35.084 } 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "method": "bdev_nvme_attach_controller", 00:24:35.084 "params": { 00:24:35.084 "name": "nvme0", 00:24:35.084 "trtype": "TCP", 00:24:35.084 "adrfam": "IPv4", 00:24:35.084 "traddr": "10.0.0.2", 00:24:35.084 "trsvcid": "4420", 00:24:35.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.084 "prchk_reftag": false, 00:24:35.084 "prchk_guard": false, 00:24:35.084 "ctrlr_loss_timeout_sec": 0, 00:24:35.084 "reconnect_delay_sec": 0, 00:24:35.084 "fast_io_fail_timeout_sec": 0, 00:24:35.084 "psk": "key0", 00:24:35.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:35.084 "hdgst": false, 00:24:35.084 "ddgst": false, 00:24:35.084 "multipath": "multipath" 00:24:35.084 } 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "method": "bdev_nvme_set_hotplug", 00:24:35.084 "params": { 00:24:35.084 "period_us": 100000, 00:24:35.084 "enable": false 00:24:35.084 } 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "method": "bdev_enable_histogram", 00:24:35.084 "params": { 00:24:35.084 "name": "nvme0n1", 00:24:35.084 "enable": true 00:24:35.084 } 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "method": "bdev_wait_for_examine" 00:24:35.084 } 00:24:35.084 ] 00:24:35.084 }, 00:24:35.084 { 00:24:35.084 "subsystem": "nbd", 00:24:35.084 "config": [] 00:24:35.084 } 00:24:35.084 ] 00:24:35.084 }' 00:24:35.084 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3507681 00:24:35.084 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3507681 ']' 00:24:35.085 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3507681 00:24:35.085 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:35.085 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:35.085 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3507681 00:24:35.085 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:35.085 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:35.085 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3507681' 00:24:35.085 killing process with pid 3507681 00:24:35.085 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3507681 00:24:35.085 Received shutdown signal, test time was about 1.000000 seconds 00:24:35.085 00:24:35.085 Latency(us) 00:24:35.085 [2024-11-09T22:58:01.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.085 [2024-11-09T22:58:01.286Z] =================================================================================================================== 00:24:35.085 [2024-11-09T22:58:01.286Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:35.085 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3507681 00:24:36.018 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3507526 00:24:36.018 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3507526 
']' 00:24:36.018 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3507526 00:24:36.018 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:36.018 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:36.018 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3507526 00:24:36.018 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:36.018 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:36.018 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3507526' 00:24:36.018 killing process with pid 3507526 00:24:36.018 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3507526 00:24:36.018 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3507526 00:24:37.393 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:37.393 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:37.393 "subsystems": [ 00:24:37.394 { 00:24:37.394 "subsystem": "keyring", 00:24:37.394 "config": [ 00:24:37.394 { 00:24:37.394 "method": "keyring_file_add_key", 00:24:37.394 "params": { 00:24:37.394 "name": "key0", 00:24:37.394 "path": "/tmp/tmp.WSVA3YIoA1" 00:24:37.394 } 00:24:37.394 } 00:24:37.394 ] 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "subsystem": "iobuf", 00:24:37.394 "config": [ 00:24:37.394 { 00:24:37.394 "method": "iobuf_set_options", 00:24:37.394 "params": { 00:24:37.394 "small_pool_count": 8192, 00:24:37.394 "large_pool_count": 1024, 00:24:37.394 "small_bufsize": 8192, 00:24:37.394 "large_bufsize": 135168, 00:24:37.394 "enable_numa": false 00:24:37.394 } 00:24:37.394 } 00:24:37.394 ] 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "subsystem": "sock", 00:24:37.394 "config": [ 00:24:37.394 { 00:24:37.394 "method": "sock_set_default_impl", 00:24:37.394 "params": { 00:24:37.394 "impl_name": "posix" 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "sock_impl_set_options", 00:24:37.394 "params": { 00:24:37.394 "impl_name": "ssl", 00:24:37.394 "recv_buf_size": 4096, 00:24:37.394 "send_buf_size": 4096, 00:24:37.394 "enable_recv_pipe": true, 00:24:37.394 "enable_quickack": false, 00:24:37.394 "enable_placement_id": 0, 00:24:37.394 "enable_zerocopy_send_server": true, 00:24:37.394 "enable_zerocopy_send_client": false, 00:24:37.394 "zerocopy_threshold": 0, 00:24:37.394 "tls_version": 0, 00:24:37.394 "enable_ktls": false 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "sock_impl_set_options", 00:24:37.394 "params": { 00:24:37.394 "impl_name": "posix", 00:24:37.394 "recv_buf_size": 2097152, 00:24:37.394 "send_buf_size": 2097152, 00:24:37.394 "enable_recv_pipe": true, 00:24:37.394 "enable_quickack": false, 00:24:37.394 "enable_placement_id": 0, 00:24:37.394 "enable_zerocopy_send_server": true, 00:24:37.394 "enable_zerocopy_send_client": false, 00:24:37.394 "zerocopy_threshold": 0, 00:24:37.394 "tls_version": 0, 00:24:37.394 "enable_ktls": false 00:24:37.394 } 00:24:37.394 } 00:24:37.394 ] 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "subsystem": "vmd", 00:24:37.394 "config": [] 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "subsystem": "accel", 00:24:37.394 
"config": [ 00:24:37.394 { 00:24:37.394 "method": "accel_set_options", 00:24:37.394 "params": { 00:24:37.394 "small_cache_size": 128, 00:24:37.394 "large_cache_size": 16, 00:24:37.394 "task_count": 2048, 00:24:37.394 "sequence_count": 2048, 00:24:37.394 "buf_count": 2048 00:24:37.394 } 00:24:37.394 } 00:24:37.394 ] 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "subsystem": "bdev", 00:24:37.394 "config": [ 00:24:37.394 { 00:24:37.394 "method": "bdev_set_options", 00:24:37.394 "params": { 00:24:37.394 "bdev_io_pool_size": 65535, 00:24:37.394 "bdev_io_cache_size": 256, 00:24:37.394 "bdev_auto_examine": true, 00:24:37.394 "iobuf_small_cache_size": 128, 00:24:37.394 "iobuf_large_cache_size": 16 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "bdev_raid_set_options", 00:24:37.394 "params": { 00:24:37.394 "process_window_size_kb": 1024, 00:24:37.394 "process_max_bandwidth_mb_sec": 0 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "bdev_iscsi_set_options", 00:24:37.394 "params": { 00:24:37.394 "timeout_sec": 30 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "bdev_nvme_set_options", 00:24:37.394 "params": { 00:24:37.394 "action_on_timeout": "none", 00:24:37.394 "timeout_us": 0, 00:24:37.394 "timeout_admin_us": 0, 00:24:37.394 "keep_alive_timeout_ms": 10000, 00:24:37.394 "arbitration_burst": 0, 00:24:37.394 "low_priority_weight": 0, 00:24:37.394 "medium_priority_weight": 0, 00:24:37.394 "high_priority_weight": 0, 00:24:37.394 "nvme_adminq_poll_period_us": 10000, 00:24:37.394 "nvme_ioq_poll_period_us": 0, 00:24:37.394 "io_queue_requests": 0, 00:24:37.394 "delay_cmd_submit": true, 00:24:37.394 "transport_retry_count": 4, 00:24:37.394 "bdev_retry_count": 3, 00:24:37.394 "transport_ack_timeout": 0, 00:24:37.394 "ctrlr_loss_timeout_sec": 0, 00:24:37.394 "reconnect_delay_sec": 0, 00:24:37.394 "fast_io_fail_timeout_sec": 0, 00:24:37.394 "disable_auto_failback": false, 00:24:37.394 "generate_uuids": false, 00:24:37.394 "transport_tos": 0, 00:24:37.394 "nvme_error_stat": false, 00:24:37.394 "rdma_srq_size": 0, 00:24:37.394 "io_path_stat": false, 00:24:37.394 "allow_accel_sequence": false, 00:24:37.394 "rdma_max_cq_size": 0, 00:24:37.394 "rdma_cm_event_timeout_ms": 0, 00:24:37.394 "dhchap_digests": [ 00:24:37.394 "sha256", 00:24:37.394 "sha384", 00:24:37.394 "sha512" 00:24:37.394 ], 00:24:37.394 "dhchap_dhgroups": [ 00:24:37.394 "null", 00:24:37.394 "ffdhe2048", 00:24:37.394 "ffdhe3072", 00:24:37.394 "ffdhe4096", 00:24:37.394 "ffdhe6144", 00:24:37.394 "ffdhe8192" 00:24:37.394 ] 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "bdev_nvme_set_hotplug", 00:24:37.394 "params": { 00:24:37.394 "period_us": 100000, 00:24:37.394 "enable": false 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "bdev_malloc_create", 00:24:37.394 "params": { 00:24:37.394 "name": "malloc0", 00:24:37.394 "num_blocks": 8192, 00:24:37.394 "block_size": 4096, 00:24:37.394 "physical_block_size": 4096, 00:24:37.394 "uuid": "837b2fda-a26e-4be7-83fa-abc7c63c19ed", 00:24:37.394 "optimal_io_boundary": 0, 00:24:37.394 "md_size": 0, 00:24:37.394 "dif_type": 0, 00:24:37.394 "dif_is_head_of_md": false, 00:24:37.394 "dif_pi_format": 0 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "bdev_wait_for_examine" 00:24:37.394 } 00:24:37.394 ] 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "subsystem": "nbd", 00:24:37.394 "config": [] 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "subsystem": "scheduler", 00:24:37.394 "config": [ 00:24:37.394 { 
00:24:37.394 "method": "framework_set_scheduler", 00:24:37.394 "params": { 00:24:37.394 "name": "static" 00:24:37.394 } 00:24:37.394 } 00:24:37.394 ] 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "subsystem": "nvmf", 00:24:37.394 "config": [ 00:24:37.394 { 00:24:37.394 "method": "nvmf_set_config", 00:24:37.394 "params": { 00:24:37.394 "discovery_filter": "match_any", 00:24:37.394 "admin_cmd_passthru": { 00:24:37.394 "identify_ctrlr": false 00:24:37.394 }, 00:24:37.394 "dhchap_digests": [ 00:24:37.394 "sha256", 00:24:37.394 "sha384", 00:24:37.394 "sha512" 00:24:37.394 ], 00:24:37.394 "dhchap_dhgroups": [ 00:24:37.394 "null", 00:24:37.394 "ffdhe2048", 00:24:37.394 "ffdhe3072", 00:24:37.394 "ffdhe4096", 00:24:37.394 "ffdhe6144", 00:24:37.394 "ffdhe8192" 00:24:37.394 ] 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "nvmf_set_max_subsystems", 00:24:37.394 "params": { 00:24:37.394 "max_subsystems": 1024 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "nvmf_set_crdt", 00:24:37.394 "params": { 00:24:37.394 "crdt1": 0, 00:24:37.394 "crdt2": 0, 00:24:37.394 "crdt3": 0 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "nvmf_create_transport", 00:24:37.394 "params": { 00:24:37.394 "trtype": "TCP", 00:24:37.394 "max_queue_depth": 128, 00:24:37.394 "max_io_qpairs_per_ctrlr": 127, 00:24:37.394 "in_capsule_data_size": 4096, 00:24:37.394 "max_io_size": 131072, 00:24:37.394 "io_unit_size": 131072, 00:24:37.394 "max_aq_depth": 128, 00:24:37.394 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:37.394 "num_shared_buffers": 511, 00:24:37.394 "buf_cache_size": 4294967295, 00:24:37.394 "dif_insert_or_strip": false, 00:24:37.394 "zcopy": false, 00:24:37.394 "c2h_success": false, 00:24:37.394 "sock_priority": 0, 00:24:37.394 "abort_timeout_sec": 1, 00:24:37.394 "ack_timeout": 0, 00:24:37.394 "data_wr_pool_size": 0 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "nvmf_create_subsystem", 00:24:37.394 "params": { 00:24:37.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.394 "allow_any_host": false, 00:24:37.394 "serial_number": "00000000000000000000", 00:24:37.394 "model_number": "SPDK bdev Controller", 00:24:37.394 "max_namespaces": 32, 00:24:37.394 "min_cntlid": 1, 00:24:37.394 "max_cntlid": 65519, 00:24:37.394 "ana_reporting": false 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "nvmf_subsystem_add_host", 00:24:37.394 "params": { 00:24:37.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.394 "host": "nqn.2016-06.io.spdk:host1", 00:24:37.394 "psk": "key0" 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "nvmf_subsystem_add_ns", 00:24:37.394 "params": { 00:24:37.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.394 "namespace": { 00:24:37.394 "nsid": 1, 00:24:37.394 "bdev_name": "malloc0", 00:24:37.394 "nguid": "837B2FDAA26E4BE783FAABC7C63C19ED", 00:24:37.394 "uuid": "837b2fda-a26e-4be7-83fa-abc7c63c19ed", 00:24:37.394 "no_auto_visible": false 00:24:37.394 } 00:24:37.394 } 00:24:37.394 }, 00:24:37.394 { 00:24:37.394 "method": "nvmf_subsystem_add_listener", 00:24:37.394 "params": { 00:24:37.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.394 "listen_address": { 00:24:37.394 "trtype": "TCP", 00:24:37.394 "adrfam": "IPv4", 00:24:37.394 "traddr": "10.0.0.2", 00:24:37.394 "trsvcid": "4420" 00:24:37.394 }, 00:24:37.394 "secure_channel": false, 00:24:37.394 "sock_impl": "ssl" 00:24:37.394 } 00:24:37.394 } 00:24:37.394 ] 00:24:37.394 } 
00:24:37.394 ] 00:24:37.394 }' 00:24:37.394 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:37.394 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.394 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3508359 00:24:37.394 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:37.394 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3508359 00:24:37.394 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3508359 ']' 00:24:37.394 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.394 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:37.394 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.394 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:37.394 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.394 [2024-11-09 23:58:03.476421] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:24:37.394 [2024-11-09 23:58:03.476561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.652 [2024-11-09 23:58:03.617229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.652 [2024-11-09 23:58:03.741190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.652 [2024-11-09 23:58:03.741288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.652 [2024-11-09 23:58:03.741313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.652 [2024-11-09 23:58:03.741337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.652 [2024-11-09 23:58:03.741356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:37.652 [2024-11-09 23:58:03.743090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.218 [2024-11-09 23:58:04.292487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.218 [2024-11-09 23:58:04.324520] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:38.218 [2024-11-09 23:58:04.324847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.475 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:38.475 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:38.475 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.475 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.475 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.475 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.475 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3508509 00:24:38.476 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3508509 /var/tmp/bdevperf.sock 00:24:38.476 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3508509 ']' 00:24:38.476 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:38.476 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:38.476 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:38.476 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:38.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:38.476 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:38.476 "subsystems": [ 00:24:38.476 { 00:24:38.476 "subsystem": "keyring", 00:24:38.476 "config": [ 00:24:38.476 { 00:24:38.476 "method": "keyring_file_add_key", 00:24:38.476 "params": { 00:24:38.476 "name": "key0", 00:24:38.476 "path": "/tmp/tmp.WSVA3YIoA1" 00:24:38.476 } 00:24:38.476 } 00:24:38.476 ] 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "subsystem": "iobuf", 00:24:38.476 "config": [ 00:24:38.476 { 00:24:38.476 "method": "iobuf_set_options", 00:24:38.476 "params": { 00:24:38.476 "small_pool_count": 8192, 00:24:38.476 "large_pool_count": 1024, 00:24:38.476 "small_bufsize": 8192, 00:24:38.476 "large_bufsize": 135168, 00:24:38.476 "enable_numa": false 00:24:38.476 } 00:24:38.476 } 00:24:38.476 ] 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "subsystem": "sock", 00:24:38.476 "config": [ 00:24:38.476 { 00:24:38.476 "method": "sock_set_default_impl", 00:24:38.476 "params": { 00:24:38.476 "impl_name": "posix" 00:24:38.476 } 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "method": "sock_impl_set_options", 00:24:38.476 "params": { 00:24:38.476 "impl_name": "ssl", 00:24:38.476 "recv_buf_size": 4096, 00:24:38.476 "send_buf_size": 4096, 00:24:38.476 "enable_recv_pipe": true, 00:24:38.476 "enable_quickack": false, 00:24:38.476 "enable_placement_id": 0, 00:24:38.476 "enable_zerocopy_send_server": true, 00:24:38.476 "enable_zerocopy_send_client": false, 00:24:38.476 "zerocopy_threshold": 0, 00:24:38.476 "tls_version": 0, 00:24:38.476 "enable_ktls": false 00:24:38.476 } 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "method": "sock_impl_set_options", 00:24:38.476 "params": { 00:24:38.476 "impl_name": "posix", 00:24:38.476 "recv_buf_size": 2097152, 00:24:38.476 "send_buf_size": 2097152, 00:24:38.476 "enable_recv_pipe": true, 00:24:38.476 "enable_quickack": false, 00:24:38.476 "enable_placement_id": 0, 00:24:38.476 "enable_zerocopy_send_server": true, 00:24:38.476 "enable_zerocopy_send_client": false, 00:24:38.476 "zerocopy_threshold": 0, 00:24:38.476 "tls_version": 0, 00:24:38.476 "enable_ktls": false 00:24:38.476 } 00:24:38.476 } 00:24:38.476 ] 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "subsystem": "vmd", 00:24:38.476 "config": [] 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "subsystem": "accel", 00:24:38.476 "config": [ 00:24:38.476 { 00:24:38.476 "method": "accel_set_options", 00:24:38.476 "params": { 00:24:38.476 "small_cache_size": 128, 00:24:38.476 "large_cache_size": 16, 00:24:38.476 "task_count": 2048, 00:24:38.476 "sequence_count": 2048, 00:24:38.476 "buf_count": 2048 00:24:38.476 } 00:24:38.476 } 00:24:38.476 ] 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "subsystem": "bdev", 00:24:38.476 "config": [ 00:24:38.476 { 00:24:38.476 "method": "bdev_set_options", 00:24:38.476 "params": { 00:24:38.476 "bdev_io_pool_size": 65535, 00:24:38.476 "bdev_io_cache_size": 256, 00:24:38.476 "bdev_auto_examine": true, 00:24:38.476 "iobuf_small_cache_size": 128, 00:24:38.476 "iobuf_large_cache_size": 16 00:24:38.476 } 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "method": "bdev_raid_set_options", 00:24:38.476 "params": { 00:24:38.476 "process_window_size_kb": 1024, 00:24:38.476 "process_max_bandwidth_mb_sec": 0 00:24:38.476 } 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "method": "bdev_iscsi_set_options", 00:24:38.476 "params": { 00:24:38.476 "timeout_sec": 30 00:24:38.476 } 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "method": "bdev_nvme_set_options", 00:24:38.476 "params": { 00:24:38.476 "action_on_timeout": "none", 
00:24:38.476 "timeout_us": 0, 00:24:38.476 "timeout_admin_us": 0, 00:24:38.476 "keep_alive_timeout_ms": 10000, 00:24:38.476 "arbitration_burst": 0, 00:24:38.476 "low_priority_weight": 0, 00:24:38.476 "medium_priority_weight": 0, 00:24:38.476 "high_priority_weight": 0, 00:24:38.476 "nvme_adminq_poll_period_us": 10000, 00:24:38.476 "nvme_ioq_poll_period_us": 0, 00:24:38.476 "io_queue_requests": 512, 00:24:38.476 "delay_cmd_submit": true, 00:24:38.476 "transport_retry_count": 4, 00:24:38.476 "bdev_retry_count": 3, 00:24:38.476 "transport_ack_timeout": 0, 00:24:38.476 "ctrlr_loss_timeout_sec": 0, 00:24:38.476 "reconnect_delay_sec": 0, 00:24:38.476 "fast_io_fail_timeout_sec": 0, 00:24:38.476 "disable_auto_failback": false, 00:24:38.476 "generate_uuids": false, 00:24:38.476 "transport_tos": 0, 00:24:38.476 "nvme_error_stat": false, 00:24:38.476 "rdma_srq_size": 0, 00:24:38.476 "io_path_stat": false, 00:24:38.476 "allow_accel_sequence": false, 00:24:38.476 "rdma_max_cq_size": 0, 00:24:38.476 "rdma_cm_event_timeout_ms": 0, 00:24:38.476 "dhchap_digests": [ 00:24:38.476 "sha256", 00:24:38.476 "sha384", 00:24:38.476 "sha512" 00:24:38.476 ], 00:24:38.476 "dhchap_dhgroups": [ 00:24:38.476 "null", 00:24:38.476 "ffdhe2048", 00:24:38.476 "ffdhe3072", 00:24:38.476 "ffdhe4096", 00:24:38.476 "ffdhe6144", 00:24:38.476 "ffdhe8192" 00:24:38.476 ] 00:24:38.476 } 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "method": "bdev_nvme_attach_controller", 00:24:38.476 "params": { 00:24:38.476 "name": "nvme0", 00:24:38.476 "trtype": "TCP", 00:24:38.476 "adrfam": "IPv4", 00:24:38.476 "traddr": "10.0.0.2", 00:24:38.476 "trsvcid": "4420", 00:24:38.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.476 "prchk_reftag": false, 00:24:38.476 "prchk_guard": false, 00:24:38.476 "ctrlr_loss_timeout_sec": 0, 00:24:38.476 "reconnect_delay_sec": 0, 00:24:38.476 "fast_io_fail_timeout_sec": 0, 00:24:38.476 "psk": "key0", 00:24:38.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:38.476 "hdgst": false, 00:24:38.476 "ddgst": false, 00:24:38.476 "multipath": "multipath" 00:24:38.476 } 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "method": "bdev_nvme_set_hotplug", 00:24:38.476 "params": { 00:24:38.476 "period_us": 100000, 00:24:38.476 "enable": false 00:24:38.476 } 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "method": "bdev_enable_histogram", 00:24:38.476 "params": { 00:24:38.476 "name": "nvme0n1", 00:24:38.476 "enable": true 00:24:38.476 } 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "method": "bdev_wait_for_examine" 00:24:38.476 } 00:24:38.476 ] 00:24:38.476 }, 00:24:38.476 { 00:24:38.476 "subsystem": "nbd", 00:24:38.476 "config": [] 00:24:38.476 } 00:24:38.477 ] 00:24:38.477 }' 00:24:38.477 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:38.477 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.477 [2024-11-09 23:58:04.557804] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:24:38.477 [2024-11-09 23:58:04.557936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3508509 ] 00:24:38.735 [2024-11-09 23:58:04.701765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.735 [2024-11-09 23:58:04.841671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.302 [2024-11-09 23:58:05.274792] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.560 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:39.560 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:39.560 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:39.560 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:39.818 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.818 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:39.818 Running I/O for 1 seconds... 00:24:41.192 2543.00 IOPS, 9.93 MiB/s 00:24:41.192 Latency(us) 00:24:41.192 [2024-11-09T22:58:07.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.192 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:41.192 Verification LBA range: start 0x0 length 0x2000 00:24:41.192 nvme0n1 : 1.04 2569.26 10.04 0.00 0.00 49018.21 11262.48 39807.05 00:24:41.192 [2024-11-09T22:58:07.393Z] =================================================================================================================== 00:24:41.192 [2024-11-09T22:58:07.393Z] Total : 2569.26 10.04 0.00 0.00 49018.21 11262.48 39807.05 00:24:41.192 { 00:24:41.192 "results": [ 00:24:41.192 { 00:24:41.192 "job": "nvme0n1", 00:24:41.192 "core_mask": "0x2", 00:24:41.192 "workload": "verify", 00:24:41.192 "status": "finished", 00:24:41.192 "verify_range": { 00:24:41.192 "start": 0, 00:24:41.192 "length": 8192 00:24:41.192 }, 00:24:41.192 "queue_depth": 128, 00:24:41.192 "io_size": 4096, 00:24:41.192 "runtime": 1.039989, 00:24:41.192 "iops": 2569.2579440744084, 00:24:41.192 "mibps": 10.036163844040658, 00:24:41.192 "io_failed": 0, 00:24:41.192 "io_timeout": 0, 00:24:41.192 "avg_latency_us": 49018.21005100909, 00:24:41.192 "min_latency_us": 11262.482962962962, 00:24:41.192 "max_latency_us": 39807.05185185185 00:24:41.192 } 00:24:41.192 ], 00:24:41.192 "core_count": 1 00:24:41.192 } 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' 
--id = --pid ']' 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:41.192 nvmf_trace.0 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3508509 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3508509 ']' 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3508509 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3508509 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3508509' 00:24:41.192 killing process with pid 3508509 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3508509 00:24:41.192 Received shutdown signal, test time was about 1.000000 seconds 00:24:41.192 00:24:41.192 Latency(us) 00:24:41.192 [2024-11-09T22:58:07.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.192 [2024-11-09T22:58:07.393Z] =================================================================================================================== 00:24:41.192 [2024-11-09T22:58:07.393Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:41.192 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3508509 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:42.126 rmmod nvme_tcp 00:24:42.126 rmmod nvme_fabrics 00:24:42.126 rmmod nvme_keyring 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:42.126 23:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3508359 ']' 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3508359 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3508359 ']' 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3508359 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3508359 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3508359' 00:24:42.126 killing process with pid 3508359 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3508359 00:24:42.126 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3508359 00:24:43.502 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.502 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.502 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.502 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:43.502 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:43.502 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.502 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.502 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.502 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.502 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.502 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.502 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.407 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.407 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.KQvZkzf5QJ /tmp/tmp.ltpAvppU6T /tmp/tmp.WSVA3YIoA1 00:24:45.407 00:24:45.407 real 1m53.389s 00:24:45.407 user 3m8.112s 00:24:45.407 sys 0m27.263s 00:24:45.407 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:45.407 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.407 ************************************ 00:24:45.407 END TEST nvmf_tls 
00:24:45.407 ************************************ 00:24:45.408 23:58:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:45.408 23:58:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:45.408 23:58:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:45.408 23:58:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:45.408 ************************************ 00:24:45.408 START TEST nvmf_fips 00:24:45.408 ************************************ 00:24:45.408 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:45.408 * Looking for test storage... 00:24:45.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:45.408 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:45.408 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:45.408 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:45.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.667 --rc genhtml_branch_coverage=1 00:24:45.667 --rc genhtml_function_coverage=1 00:24:45.667 --rc genhtml_legend=1 00:24:45.667 --rc geninfo_all_blocks=1 00:24:45.667 --rc geninfo_unexecuted_blocks=1 00:24:45.667 00:24:45.667 ' 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:45.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.667 --rc genhtml_branch_coverage=1 00:24:45.667 --rc genhtml_function_coverage=1 00:24:45.667 --rc genhtml_legend=1 00:24:45.667 --rc geninfo_all_blocks=1 00:24:45.667 --rc geninfo_unexecuted_blocks=1 00:24:45.667 00:24:45.667 ' 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:45.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.667 --rc genhtml_branch_coverage=1 00:24:45.667 --rc genhtml_function_coverage=1 00:24:45.667 --rc genhtml_legend=1 00:24:45.667 --rc geninfo_all_blocks=1 00:24:45.667 --rc geninfo_unexecuted_blocks=1 00:24:45.667 00:24:45.667 ' 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:45.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.667 --rc genhtml_branch_coverage=1 00:24:45.667 --rc genhtml_function_coverage=1 00:24:45.667 --rc genhtml_legend=1 00:24:45.667 --rc geninfo_all_blocks=1 00:24:45.667 --rc geninfo_unexecuted_blocks=1 00:24:45.667 00:24:45.667 ' 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.667 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:45.668 23:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:45.668 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:45.669 Error setting digest 00:24:45.669 40D2A1F2A67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:45.669 40D2A1F2A67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.669 
23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.669 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:47.570 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.570 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:47.570 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:47.570 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.571 23:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:47.571 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:47.571 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.571 23:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:47.571 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:47.571 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:47.571 23:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.571 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:47.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:24:47.830 00:24:47.830 --- 10.0.0.2 ping statistics --- 00:24:47.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.830 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:47.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:24:47.830 00:24:47.830 --- 10.0.0.1 ping statistics --- 00:24:47.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.830 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3511013 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3511013 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3511013 ']' 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:47.830 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:47.830 [2024-11-09 23:58:14.000218] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
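The block above is nvmf_tcp_init building the split topology every TCP phy test in this job uses: the first E810 port (cvl_0_0) is moved into a dedicated network namespace and addressed as the target, the second port (cvl_0_1) stays in the root namespace as the initiator, an SPDK-tagged iptables rule opens port 4420, and both directions are ping-verified before the target application is started inside the namespace. A condensed sketch of the same sequence, using only commands that appear in the trace (the common.sh helpers add retries and bookkeeping that are omitted here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                     # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target namespace -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # target app, core mask 0x2 for fips.sh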
00:24:47.830 [2024-11-09 23:58:14.000378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.089 [2024-11-09 23:58:14.153939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.347 [2024-11-09 23:58:14.296039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.347 [2024-11-09 23:58:14.296117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.347 [2024-11-09 23:58:14.296144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.347 [2024-11-09 23:58:14.296168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.347 [2024-11-09 23:58:14.296189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.347 [2024-11-09 23:58:14.297810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.914 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:48.914 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:48.914 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.914 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:48.914 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:48.914 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.914 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:48.914 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:48.914 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:48.914 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.34v 00:24:48.914 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:48.914 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.34v 00:24:48.914 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.34v 00:24:48.914 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.34v 00:24:48.914 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:49.172 [2024-11-09 23:58:15.304309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.172 [2024-11-09 23:58:15.320212] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:49.172 [2024-11-09 23:58:15.320524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.430 malloc0 00:24:49.430 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:49.430 23:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3511290 00:24:49.430 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:49.430 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3511290 /var/tmp/bdevperf.sock 00:24:49.430 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3511290 ']' 00:24:49.430 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.430 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:49.430 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:49.430 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:49.430 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:49.430 [2024-11-09 23:58:15.525982] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:24:49.430 [2024-11-09 23:58:15.526149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511290 ] 00:24:49.688 [2024-11-09 23:58:15.659492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.688 [2024-11-09 23:58:15.779529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.622 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:50.622 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:50.622 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.34v 00:24:50.622 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:50.879 [2024-11-09 23:58:17.018336] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:51.138 TLSTESTn1 00:24:51.138 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:51.138 Running I/O for 10 seconds... 
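At this point fips.sh has the full TLS path wired up: the interchange PSK is written to a mode-0600 temp file (mktemp returned /tmp/spdk-psk.34v in this run), the target subsystem was configured through scripts/rpc.py (the TCP transport and 10.0.0.2:4420 listener notices above), bdevperf was started as the initiator-side application on its own RPC socket, and a TLS-protected controller was attached through that socket before perform_tests kicks off the 10 second verify workload. Condensed, with the Jenkins workspace paths shortened and all values copied from the trace:

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)          # /tmp/spdk-psk.34v here
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests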
00:24:53.448 2459.00 IOPS, 9.61 MiB/s [2024-11-09T22:58:20.580Z] 2516.00 IOPS, 9.83 MiB/s [2024-11-09T22:58:21.526Z] 2534.33 IOPS, 9.90 MiB/s [2024-11-09T22:58:22.459Z] 2547.50 IOPS, 9.95 MiB/s [2024-11-09T22:58:23.393Z] 2552.60 IOPS, 9.97 MiB/s [2024-11-09T22:58:24.326Z] 2562.33 IOPS, 10.01 MiB/s [2024-11-09T22:58:25.700Z] 2565.71 IOPS, 10.02 MiB/s [2024-11-09T22:58:26.301Z] 2567.75 IOPS, 10.03 MiB/s [2024-11-09T22:58:27.700Z] 2570.33 IOPS, 10.04 MiB/s [2024-11-09T22:58:27.700Z] 2571.70 IOPS, 10.05 MiB/s 00:25:01.499 Latency(us) 00:25:01.499 [2024-11-09T22:58:27.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.499 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:01.499 Verification LBA range: start 0x0 length 0x2000 00:25:01.499 TLSTESTn1 : 10.02 2578.03 10.07 0.00 0.00 49563.33 8883.77 45438.29 00:25:01.499 [2024-11-09T22:58:27.700Z] =================================================================================================================== 00:25:01.499 [2024-11-09T22:58:27.700Z] Total : 2578.03 10.07 0.00 0.00 49563.33 8883.77 45438.29 00:25:01.499 { 00:25:01.499 "results": [ 00:25:01.499 { 00:25:01.499 "job": "TLSTESTn1", 00:25:01.499 "core_mask": "0x4", 00:25:01.499 "workload": "verify", 00:25:01.499 "status": "finished", 00:25:01.499 "verify_range": { 00:25:01.499 "start": 0, 00:25:01.499 "length": 8192 00:25:01.499 }, 00:25:01.499 "queue_depth": 128, 00:25:01.499 "io_size": 4096, 00:25:01.499 "runtime": 10.024726, 00:25:01.499 "iops": 2578.0255739658123, 00:25:01.499 "mibps": 10.070412398303954, 00:25:01.499 "io_failed": 0, 00:25:01.499 "io_timeout": 0, 00:25:01.499 "avg_latency_us": 49563.33422621198, 00:25:01.499 "min_latency_us": 8883.76888888889, 00:25:01.499 "max_latency_us": 45438.293333333335 00:25:01.499 } 00:25:01.499 ], 00:25:01.499 "core_count": 1 00:25:01.499 } 00:25:01.499 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:01.499 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:01.499 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:25:01.499 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:25:01.499 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:01.500 nvmf_trace.0 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3511290 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3511290 ']' 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 3511290 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3511290 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3511290' 00:25:01.500 killing process with pid 3511290 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3511290 00:25:01.500 Received shutdown signal, test time was about 10.000000 seconds 00:25:01.500 00:25:01.500 Latency(us) 00:25:01.500 [2024-11-09T22:58:27.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.500 [2024-11-09T22:58:27.701Z] =================================================================================================================== 00:25:01.500 [2024-11-09T22:58:27.701Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.500 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3511290 00:25:02.065 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:02.065 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:02.065 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:02.065 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:02.065 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:02.065 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:02.065 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:02.065 rmmod nvme_tcp 00:25:02.065 rmmod nvme_fabrics 00:25:02.065 rmmod nvme_keyring 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3511013 ']' 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3511013 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3511013 ']' 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3511013 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3511013 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:02.323 23:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3511013' 00:25:02.323 killing process with pid 3511013 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3511013 00:25:02.323 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3511013 00:25:03.696 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.696 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.696 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.697 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:03.697 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:03.697 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.697 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.697 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.697 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.697 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.697 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.697 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.596 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.596 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.34v 00:25:05.596 00:25:05.596 real 0m20.158s 00:25:05.596 user 0m27.775s 00:25:05.596 sys 0m5.267s 00:25:05.596 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:05.596 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:05.596 ************************************ 00:25:05.596 END TEST nvmf_fips 00:25:05.596 ************************************ 00:25:05.596 23:58:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:05.596 23:58:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:05.596 23:58:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:05.596 23:58:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:05.596 ************************************ 00:25:05.596 START TEST nvmf_control_msg_list 00:25:05.596 ************************************ 00:25:05.596 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:05.596 * Looking for test storage... 
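Teardown for nvmf_fips is the mirror image of the setup: the bdevperf (3511290) and nvmf_tgt (3511013) processes are killed, the nvme-tcp and nvme-fabrics modules are unloaded (the rmmod lines above show nvme_keyring going with them), the SPDK-tagged firewall rule is dropped by replaying a filtered ruleset, the target namespace is torn down, and the temporary PSK file is removed. Roughly, with the namespace removal written as the plain command the _remove_spdk_ns helper is assumed to boil down to (its internals are not traced here):

kill 3511290 3511013                                    # bdevperf (reactor_2), then the nvmf target
modprobe -v -r nvme-tcp                                 # pulls nvme_tcp / nvme_fabrics / nvme_keyring out
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged ACCEPT rule
ip netns delete cvl_0_0_ns_spdk                         # assumption: what _remove_spdk_ns amounts to
ip -4 addr flush cvl_0_1
rm -f /tmp/spdk-psk.34v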
00:25:05.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:05.596 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:05.596 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:25:05.596 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:05.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.855 --rc genhtml_branch_coverage=1 00:25:05.855 --rc genhtml_function_coverage=1 00:25:05.855 --rc genhtml_legend=1 00:25:05.855 --rc geninfo_all_blocks=1 00:25:05.855 --rc geninfo_unexecuted_blocks=1 00:25:05.855 00:25:05.855 ' 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:05.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.855 --rc genhtml_branch_coverage=1 00:25:05.855 --rc genhtml_function_coverage=1 00:25:05.855 --rc genhtml_legend=1 00:25:05.855 --rc geninfo_all_blocks=1 00:25:05.855 --rc geninfo_unexecuted_blocks=1 00:25:05.855 00:25:05.855 ' 00:25:05.855 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:05.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.856 --rc genhtml_branch_coverage=1 00:25:05.856 --rc genhtml_function_coverage=1 00:25:05.856 --rc genhtml_legend=1 00:25:05.856 --rc geninfo_all_blocks=1 00:25:05.856 --rc geninfo_unexecuted_blocks=1 00:25:05.856 00:25:05.856 ' 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:05.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.856 --rc genhtml_branch_coverage=1 00:25:05.856 --rc genhtml_function_coverage=1 00:25:05.856 --rc genhtml_legend=1 00:25:05.856 --rc geninfo_all_blocks=1 00:25:05.856 --rc geninfo_unexecuted_blocks=1 00:25:05.856 00:25:05.856 ' 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:05.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.856 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:07.758 23:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:07.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.758 23:58:33 
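The control_msg_list target repeats the same hardware discovery the earlier tests went through: common.sh builds lists of candidate Intel E810/X722 and Mellanox device IDs, keeps the two E810 functions present on this host (vendor 0x8086, device 0x159b, driver ice), and resolves each PCI function to its kernel net device through the /sys/bus/pci/devices/<bdf>/net/ glob seen below. A manual equivalent on this host would look roughly like this (the lspci invocation is an assumption; the sysfs path is the one the helper globs):

lspci -d 8086:159b                              # expect 0000:0a:00.0 and 0000:0a:00.1 on this host
for pci in 0000:0a:00.0 0000:0a:00.1; do
    ls "/sys/bus/pci/devices/$pci/net/"         # cvl_0_0 and cvl_0_1 in this run
done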
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:07.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:07.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.758 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:07.759 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:07.759 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.017 23:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:25:08.017 00:25:08.017 --- 10.0.0.2 ping statistics --- 00:25:08.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.017 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:25:08.017 00:25:08.017 --- 10.0.0.1 ping statistics --- 00:25:08.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.017 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.017 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3514822 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3514822 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 3514822 ']' 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:08.017 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:08.017 [2024-11-09 23:58:34.096080] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:25:08.018 [2024-11-09 23:58:34.096228] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.276 [2024-11-09 23:58:34.237033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.276 [2024-11-09 23:58:34.360295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.276 [2024-11-09 23:58:34.360383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.276 [2024-11-09 23:58:34.360405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.276 [2024-11-09 23:58:34.360425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.276 [2024-11-09 23:58:34.360441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
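As with the fips target, the freshly started nvmf_tgt prints how to inspect its tracepoints while it runs; the harness itself only archives the shared-memory trace file during cleanup (the tar invocation captured earlier in this log for nvmf_trace.0). Following the printed hints by hand would look roughly like:

spdk_trace -s nvmf -i 0                                       # live snapshot of the nvmf app's trace events (shm id 0, from -i 0)
tar -C /dev/shm -cvzf nvmf_trace.0_shm.tar.gz nvmf_trace.0    # keep the raw trace file for offline analysis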
00:25:08.276 [2024-11-09 23:58:34.361902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.842 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:08.842 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:25:08.842 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.842 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:08.842 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:09.100 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.100 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:09.101 [2024-11-09 23:58:35.061568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:09.101 Malloc0 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.101 23:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:09.101 [2024-11-09 23:58:35.131884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3514971 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3514972 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3514973 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3514971 00:25:09.101 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:09.101 [2024-11-09 23:58:35.262519] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:09.101 [2024-11-09 23:58:35.263037] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:09.101 [2024-11-09 23:58:35.263493] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:10.475 Initializing NVMe Controllers 00:25:10.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:10.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:10.475 Initialization complete. Launching workers. 
00:25:10.475 ======================================================== 00:25:10.475 Latency(us) 00:25:10.475 Device Information : IOPS MiB/s Average min max 00:25:10.475 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2956.00 11.55 337.69 218.56 1162.50 00:25:10.475 ======================================================== 00:25:10.475 Total : 2956.00 11.55 337.69 218.56 1162.50 00:25:10.475 00:25:10.475 Initializing NVMe Controllers 00:25:10.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:10.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:10.475 Initialization complete. Launching workers. 00:25:10.475 ======================================================== 00:25:10.475 Latency(us) 00:25:10.475 Device Information : IOPS MiB/s Average min max 00:25:10.475 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41115.27 40711.82 42262.55 00:25:10.475 ======================================================== 00:25:10.475 Total : 25.00 0.10 41115.27 40711.82 42262.55 00:25:10.475 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3514972 00:25:10.475 Initializing NVMe Controllers 00:25:10.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:10.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:10.475 Initialization complete. Launching workers. 00:25:10.475 ======================================================== 00:25:10.475 Latency(us) 00:25:10.475 Device Information : IOPS MiB/s Average min max 00:25:10.475 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2837.99 11.09 351.82 241.37 743.30 00:25:10.475 ======================================================== 00:25:10.475 Total : 2837.99 11.09 351.82 241.37 743.30 00:25:10.475 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3514973 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:10.475 rmmod nvme_tcp 00:25:10.475 rmmod nvme_fabrics 00:25:10.475 rmmod nvme_keyring 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- 
# '[' -n 3514822 ']' 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3514822 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 3514822 ']' 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 3514822 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3514822 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3514822' 00:25:10.475 killing process with pid 3514822 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 3514822 00:25:10.475 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 3514822 00:25:11.849 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:11.849 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:11.849 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:11.850 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:11.850 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:11.850 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:11.850 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:11.850 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.850 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:11.850 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.850 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.850 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.754 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.754 00:25:13.754 real 0m8.236s 00:25:13.754 user 0m7.668s 00:25:13.754 sys 0m2.934s 00:25:13.754 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:13.754 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:13.754 ************************************ 00:25:13.754 END TEST nvmf_control_msg_list 00:25:13.754 ************************************ 
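For reference, the nvmf_control_msg_list run above boils down to the target setup and load sketched below. This is a hand-written reconstruction from the xtrace, not the test script itself: it assumes it is run from an SPDK checkout against the nvmf_tgt already listening on the default RPC socket (/var/tmp/spdk.sock), and it collapses the perf_pid1..3 bookkeeping into a loop; the RPC names, flag values, and addresses are the ones recorded in the trace above.

    # TCP transport with 768-byte in-capsule data and a single control message,
    # so the three initiators below contend for the control-message pool.
    scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

    # Subsystem with a 32 MiB, 512-byte-block malloc namespace, listening on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Three one-second QD1 4K randread workers pinned to cores 0x2/0x4/0x8, run in
    # parallel and reaped with wait, mirroring the perf_pid1..3 sequence in the trace.
    for mask in 0x2 0x4 0x8; do
      build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait

The latency tables printed for each worker come straight from spdk_nvme_perf; the much higher average on the lcore-1 worker (about 41 ms versus roughly 0.35 ms on the other two) is what you would expect from the connection that ends up queuing behind the single shared control message.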
00:25:14.013 23:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:14.013 23:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:14.013 23:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:14.013 23:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:14.013 ************************************ 00:25:14.013 START TEST nvmf_wait_for_buf 00:25:14.013 ************************************ 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:14.013 * Looking for test storage... 00:25:14.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:14.013 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:14.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.014 --rc genhtml_branch_coverage=1 00:25:14.014 --rc genhtml_function_coverage=1 00:25:14.014 --rc genhtml_legend=1 00:25:14.014 --rc geninfo_all_blocks=1 00:25:14.014 --rc geninfo_unexecuted_blocks=1 00:25:14.014 00:25:14.014 ' 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:14.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.014 --rc genhtml_branch_coverage=1 00:25:14.014 --rc genhtml_function_coverage=1 00:25:14.014 --rc genhtml_legend=1 00:25:14.014 --rc geninfo_all_blocks=1 00:25:14.014 --rc geninfo_unexecuted_blocks=1 00:25:14.014 00:25:14.014 ' 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:14.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.014 --rc genhtml_branch_coverage=1 00:25:14.014 --rc genhtml_function_coverage=1 00:25:14.014 --rc genhtml_legend=1 00:25:14.014 --rc geninfo_all_blocks=1 00:25:14.014 --rc geninfo_unexecuted_blocks=1 00:25:14.014 00:25:14.014 ' 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:14.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.014 --rc genhtml_branch_coverage=1 00:25:14.014 --rc genhtml_function_coverage=1 00:25:14.014 --rc genhtml_legend=1 00:25:14.014 --rc geninfo_all_blocks=1 00:25:14.014 --rc geninfo_unexecuted_blocks=1 00:25:14.014 00:25:14.014 ' 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.014 23:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:14.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:14.014 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:14.015 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.547 
23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:16.547 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:16.547 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:16.547 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:16.547 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.547 23:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.547 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:16.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:25:16.548 00:25:16.548 --- 10.0.0.2 ping statistics --- 00:25:16.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.548 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:16.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:25:16.548 00:25:16.548 --- 10.0.0.1 ping statistics --- 00:25:16.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.548 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3517183 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3517183 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 3517183 ']' 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:16.548 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:16.548 [2024-11-09 23:58:42.449020] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
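At this point the harness has moved one e810 port (cvl_0_0, 10.0.0.2/24) into the cvl_0_0_ns_spdk namespace, kept its sibling port cvl_0_1 on the host side as 10.0.0.1/24, opened TCP/4420 in iptables, confirmed reachability with ping in both directions, and is now starting nvmf_tgt inside the namespace with --wait-for-rpc. The RPC trace that follows drives the actual wait_for_buf scenario; a condensed sketch of it is below, with every value taken from this run. Treat it as an illustration rather than the test script: the rpc.py invocation style and the final -gt check are paraphrases of the harness's rpc_cmd calls and its retry-count test.

    # Before framework init (the target was started with --wait-for-rpc), shrink the
    # shared iobuf small pool so large reads will run out of data buffers.
    scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    scripts/rpc.py framework_start_init

    # Minimal target: malloc namespace behind a TCP transport created with the
    # deliberately small buffer settings seen in the trace below (-u 8192 -n 24 -b 24).
    scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # 128 KiB randread at QD4 overruns that pool; the test then requires the nvmf_TCP
    # small-pool retry counter to be non-zero (it comes back as 614 in this run).
    build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    retries=$(scripts/rpc.py iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ "$retries" -gt 0 ]]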
00:25:16.548 [2024-11-09 23:58:42.449170] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.548 [2024-11-09 23:58:42.604039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.548 [2024-11-09 23:58:42.742923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.548 [2024-11-09 23:58:42.743014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.548 [2024-11-09 23:58:42.743040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.548 [2024-11-09 23:58:42.743065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.548 [2024-11-09 23:58:42.743084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:16.548 [2024-11-09 23:58:42.744742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:17.483 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.483 23:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.742 Malloc0 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.742 [2024-11-09 23:58:43.775001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:17.742 [2024-11-09 23:58:43.799240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.742 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:18.000 [2024-11-09 23:58:43.954783] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:19.373 Initializing NVMe Controllers 00:25:19.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:19.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:19.373 Initialization complete. Launching workers. 00:25:19.373 ======================================================== 00:25:19.373 Latency(us) 00:25:19.373 Device Information : IOPS MiB/s Average min max 00:25:19.373 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 39.86 4.98 104507.43 31905.68 191470.06 00:25:19.373 ======================================================== 00:25:19.373 Total : 39.86 4.98 104507.43 31905.68 191470.06 00:25:19.373 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=614 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 614 -eq 0 ]] 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.631 rmmod nvme_tcp 00:25:19.631 rmmod nvme_fabrics 00:25:19.631 rmmod nvme_keyring 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3517183 ']' 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3517183 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 3517183 ']' 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 3517183 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3517183 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3517183' 00:25:19.631 killing process with pid 3517183 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 3517183 00:25:19.631 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 3517183 00:25:21.006 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:21.006 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:21.006 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:21.006 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:21.006 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:21.006 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:21.006 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:21.006 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:21.006 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:21.006 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.006 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.006 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.908 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.908 00:25:22.908 real 0m8.875s 00:25:22.908 user 0m5.421s 00:25:22.908 sys 0m2.242s 00:25:22.908 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:22.908 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.908 ************************************ 00:25:22.908 END TEST nvmf_wait_for_buf 00:25:22.908 ************************************ 00:25:22.908 23:58:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:22.908 23:58:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:22.908 23:58:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:22.908 23:58:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:22.908 23:58:48 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:22.908 ************************************ 00:25:22.908 START TEST nvmf_fuzz 00:25:22.908 ************************************ 00:25:22.908 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:22.908 * Looking for test storage... 00:25:22.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:22.908 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:22.908 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:25:22.908 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:22.908 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:22.908 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.908 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.908 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.908 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.908 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.908 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.908 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.908 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.908 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:22.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.909 --rc genhtml_branch_coverage=1 00:25:22.909 --rc genhtml_function_coverage=1 00:25:22.909 --rc genhtml_legend=1 00:25:22.909 --rc geninfo_all_blocks=1 00:25:22.909 --rc geninfo_unexecuted_blocks=1 00:25:22.909 00:25:22.909 ' 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:22.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.909 --rc genhtml_branch_coverage=1 00:25:22.909 --rc genhtml_function_coverage=1 00:25:22.909 --rc genhtml_legend=1 00:25:22.909 --rc geninfo_all_blocks=1 00:25:22.909 --rc geninfo_unexecuted_blocks=1 00:25:22.909 00:25:22.909 ' 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:22.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.909 --rc genhtml_branch_coverage=1 00:25:22.909 --rc genhtml_function_coverage=1 00:25:22.909 --rc genhtml_legend=1 00:25:22.909 --rc geninfo_all_blocks=1 00:25:22.909 --rc geninfo_unexecuted_blocks=1 00:25:22.909 00:25:22.909 ' 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:22.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.909 --rc genhtml_branch_coverage=1 00:25:22.909 --rc genhtml_function_coverage=1 00:25:22.909 --rc genhtml_legend=1 00:25:22.909 --rc geninfo_all_blocks=1 00:25:22.909 --rc geninfo_unexecuted_blocks=1 00:25:22.909 00:25:22.909 ' 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:22.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.909 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:25.437 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:25.437 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:25.438 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:25.438 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:25.438 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:25.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:25.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:25:25.438 00:25:25.438 --- 10.0.0.2 ping statistics --- 00:25:25.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.438 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:25.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:25.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:25:25.438 00:25:25.438 --- 10.0.0.1 ping statistics --- 00:25:25.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.438 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3519669 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3519669 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # '[' -z 3519669 ']' 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
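Note: the nvmf_tcp_init sequence traced above puts the target-side E810 port (cvl_0_0) into its own network namespace (cvl_0_0_ns_spdk) so a single host can act as both NVMe-oF/TCP target (10.0.0.2) and initiator (10.0.0.1) across a real link, verifies connectivity with ping in both directions, and then launches nvmf_tgt inside that namespace. A minimal standalone sketch of the same topology, assuming two connected interfaces held in TGT_IF and INI_IF (hypothetical variable names; the log uses cvl_0_0 and cvl_0_1, and the namespace name below is likewise illustrative), would be roughly:

    # target-side namespace; move one port into it
    ip netns add nvmf_tgt_ns
    ip link set "$TGT_IF" netns nvmf_tgt_ns
    # initiator address stays in the root namespace, target address lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec nvmf_tgt_ns ip link set "$TGT_IF" up
    ip netns exec nvmf_tgt_ns ip link set lo up
    # let NVMe-oF/TCP (port 4420) in on the initiator-facing interface
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    # sanity-check connectivity both ways, as the log does
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1
    # run the SPDK target inside the namespace so it binds 10.0.0.2
    ip netns exec nvmf_tgt_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

Apart from the hypothetical names, every command mirrors one visible in the trace above.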
00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:25.438 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@866 -- # return 0 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.372 Malloc0 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:26.372 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:58.434 Fuzzing completed. 
Shutting down the fuzz application 00:25:58.434 00:25:58.434 Dumping successful admin opcodes: 00:25:58.434 8, 9, 10, 24, 00:25:58.434 Dumping successful io opcodes: 00:25:58.434 0, 9, 00:25:58.434 NS: 0x2000008efec0 I/O qp, Total commands completed: 328760, total successful commands: 1946, random_seed: 988758080 00:25:58.434 NS: 0x2000008efec0 admin qp, Total commands completed: 40591, total successful commands: 332, random_seed: 3207777856 00:25:58.434 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:59.000 Fuzzing completed. Shutting down the fuzz application 00:25:59.000 00:25:59.000 Dumping successful admin opcodes: 00:25:59.000 24, 00:25:59.000 Dumping successful io opcodes: 00:25:59.000 00:25:59.000 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2215929548 00:25:59.000 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2216158220 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:59.000 rmmod nvme_tcp 00:25:59.000 rmmod nvme_fabrics 00:25:59.000 rmmod nvme_keyring 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3519669 ']' 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3519669 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3519669 ']' 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # kill -0 3519669 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # uname 00:25:59.000 23:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3519669 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3519669' 00:25:59.000 killing process with pid 3519669 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@971 -- # kill 3519669 00:25:59.000 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@976 -- # wait 3519669 00:26:00.376 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:00.376 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:00.376 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:00.376 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:26:00.376 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:26:00.376 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:00.376 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:26:00.376 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:00.376 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:00.376 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.376 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.376 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.282 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:02.282 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:02.282 00:26:02.282 real 0m39.514s 00:26:02.282 user 0m56.556s 00:26:02.282 sys 0m13.604s 00:26:02.282 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:02.282 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:02.282 ************************************ 00:26:02.282 END TEST nvmf_fuzz 00:26:02.282 ************************************ 00:26:02.282 23:59:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:02.282 23:59:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:02.282 23:59:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:02.282 23:59:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:02.541 
************************************ 00:26:02.542 START TEST nvmf_multiconnection 00:26:02.542 ************************************ 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:02.542 * Looking for test storage... 00:26:02.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:02.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.542 --rc genhtml_branch_coverage=1 00:26:02.542 --rc genhtml_function_coverage=1 00:26:02.542 --rc genhtml_legend=1 00:26:02.542 --rc geninfo_all_blocks=1 00:26:02.542 --rc geninfo_unexecuted_blocks=1 00:26:02.542 00:26:02.542 ' 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:02.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.542 --rc genhtml_branch_coverage=1 00:26:02.542 --rc genhtml_function_coverage=1 00:26:02.542 --rc genhtml_legend=1 00:26:02.542 --rc geninfo_all_blocks=1 00:26:02.542 --rc geninfo_unexecuted_blocks=1 00:26:02.542 00:26:02.542 ' 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:02.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.542 --rc genhtml_branch_coverage=1 00:26:02.542 --rc genhtml_function_coverage=1 00:26:02.542 --rc genhtml_legend=1 00:26:02.542 --rc geninfo_all_blocks=1 00:26:02.542 --rc geninfo_unexecuted_blocks=1 00:26:02.542 00:26:02.542 ' 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:02.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.542 --rc genhtml_branch_coverage=1 00:26:02.542 --rc genhtml_function_coverage=1 00:26:02.542 --rc genhtml_legend=1 00:26:02.542 --rc geninfo_all_blocks=1 00:26:02.542 --rc geninfo_unexecuted_blocks=1 00:26:02.542 00:26:02.542 ' 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.542 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.543 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:04.497 23:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:04.497 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:04.497 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:04.497 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:04.497 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.497 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.756 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.756 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.756 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:04.756 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:26:04.756 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.756 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.756 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:04.756 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:04.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:26:04.756 00:26:04.756 --- 10.0.0.2 ping statistics --- 00:26:04.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.756 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:26:04.756 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:04.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:26:04.756 00:26:04.756 --- 10.0.0.1 ping statistics --- 00:26:04.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.756 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:26:04.756 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.756 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:26:04.756 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3525559 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3525559 00:26:04.757 23:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # '[' -z 3525559 ']' 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:04.757 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.015 [2024-11-09 23:59:30.993772] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:26:05.016 [2024-11-09 23:59:30.993936] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.016 [2024-11-09 23:59:31.155695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:05.274 [2024-11-09 23:59:31.290787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.274 [2024-11-09 23:59:31.290850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.274 [2024-11-09 23:59:31.290892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.274 [2024-11-09 23:59:31.290913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.274 [2024-11-09 23:59:31.290930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
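The namespace plumbing traced above can be reproduced by hand. The sketch below (plain bash, run as root) mirrors what nvmf_tcp_init and nvmfappstart did in this run: the target-side e810 port is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened, reachability is checked with ping, and nvmf_tgt is started inside the namespace. The interface names, namespace name and binary path are the ones this job discovered/used; on another host they will differ, and this is a simplified sketch rather than the common.sh functions themselves.

  # Sketch only -- mirrors the nvmf_tcp_init/nvmfappstart steps traced above (run as root).
  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0                 # target-side e810 port, moved into the namespace
  INI_IF=cvl_0_1                 # initiator-side e810 port, stays in the root namespace
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open the NVMe/TCP port on the initiator-facing interface and verify both directions.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # Start the NVMe-oF target inside the namespace, as nvmfappstart does (-m 0xF = 4 cores).
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &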
00:26:05.274 [2024-11-09 23:59:31.293340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.274 [2024-11-09 23:59:31.293384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:05.274 [2024-11-09 23:59:31.293448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.274 [2024-11-09 23:59:31.293454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:05.840 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:05.840 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@866 -- # return 0 00:26:05.840 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:05.840 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:05.840 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.099 [2024-11-09 23:59:32.054865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.099 Malloc1 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
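Everything rpc_cmd does from here on is ordinary SPDK RPC traffic aimed at the socket inside the namespace: the trace above created the TCP transport and the first Malloc/subsystem pair, and the same four RPCs repeat below for Malloc2..Malloc11 (cnode2..cnode11). A condensed sketch of that provisioning loop, using scripts/rpc.py with the same arguments the trace shows (64 MB malloc bdevs with 512-byte blocks, serials SPDK1..SPDK11, listeners on 10.0.0.2:4420); the $SPDK path is the one assumed in the previous sketch, and rpc_cmd in the real test multiplexes these calls over one socket rather than invoking rpc.py each time.

  # Sketch of the multiconnection.sh provisioning loop (not the test's exact rpc_cmd mechanism).
  RPC="ip netns exec cvl_0_0_ns_spdk $SPDK/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
      $RPC bdev_malloc_create 64 512 -b "Malloc$i"
      $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done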
00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.099 [2024-11-09 23:59:32.186146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.099 Malloc2 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:06.099 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.099 23:59:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.358 Malloc3 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.358 Malloc4 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.358 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.617 Malloc5 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.617 Malloc6 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.617 Malloc7 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
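On the initiator side (the connect loop further below in the trace) each subsystem is attached with nvme-cli and the test then waits until a block device carrying the matching SPDK serial appears. A sketch of that connect-and-wait pattern, with the host NQN/UUID taken from this run and the serial check done the same way waitforserial does (lsblk + grep); the bounded retry count of the real helper is omitted here.

  # Sketch of the host-side connect loop that follows; host NQN/UUID are the ones used by this job.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
  for i in $(seq 1 11); do
      nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
          -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
      # waitforserial: poll until lsblk reports a device whose serial is SPDK$i
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
          sleep 2
      done
  done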
00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.617 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.876 Malloc8 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.876 Malloc9 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:06.876 23:59:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.876 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.876 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.876 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:06.876 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.876 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.876 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.876 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.876 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:06.876 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.876 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.135 Malloc10 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.135 Malloc11 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.135 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:07.702 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:07.702 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:07.702 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.702 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:07.702 23:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:10.230 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:10.230 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:10.230 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK1 00:26:10.230 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:10.230 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.230 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:10.230 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.230 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:10.488 23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:10.488 23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:10.488 23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:10.488 23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:10.488 23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:13.014 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:13.014 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:13.014 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK2 00:26:13.014 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:13.014 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:13.014 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:13.014 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.014 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:13.273 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:13.273 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:13.273 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 
nvme_devices=0 00:26:13.273 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:13.273 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:15.170 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:15.170 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:15.170 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK3 00:26:15.170 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:15.171 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.171 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:15.171 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.171 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:16.105 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:16.105 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:16.105 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:16.105 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:16.105 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:18.004 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:18.004 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:18.004 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK4 00:26:18.004 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:18.004 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:18.004 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:18.004 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.004 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:18.570 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:18.570 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # 
local i=0 00:26:18.570 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.570 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:18.570 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:20.469 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:20.727 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:20.727 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK5 00:26:20.727 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:20.727 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.727 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:20.727 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.727 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:21.293 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:21.293 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:21.293 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:21.293 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:21.293 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:23.821 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:23.821 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:23.821 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK6 00:26:23.821 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:23.821 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:23.821 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:23.821 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.821 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:24.078 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:24.078 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:24.078 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:24.078 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:24.078 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:26.607 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:26.607 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:26.607 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK7 00:26:26.607 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:26.607 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.607 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:26.607 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.607 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:27.174 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:27.174 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:27.174 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:27.174 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:27.174 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:29.075 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:29.075 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:29.075 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK8 00:26:29.075 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:29.075 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:29.075 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:29.075 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.075 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:30.010 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:30.010 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:30.010 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:30.010 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:30.010 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:31.911 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:31.911 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:31.911 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK9 00:26:31.911 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:31.911 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:31.911 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:31.911 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.911 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:32.846 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:32.846 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:32.846 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:32.846 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:32.846 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:34.745 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:34.745 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:34.745 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK10 00:26:34.745 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:34.745 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:34.745 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:34.745 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.745 00:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:35.680 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:35.680 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:35.680 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:35.680 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:35.680 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:37.577 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:37.577 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:37.577 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK11 00:26:37.577 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:37.577 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:37.577 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:37.577 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:37.577 [global] 00:26:37.577 thread=1 00:26:37.577 invalidate=1 00:26:37.577 rw=read 00:26:37.577 time_based=1 00:26:37.577 runtime=10 00:26:37.577 ioengine=libaio 00:26:37.577 direct=1 00:26:37.577 bs=262144 00:26:37.577 iodepth=64 00:26:37.577 norandommap=1 00:26:37.577 numjobs=1 00:26:37.577 00:26:37.577 [job0] 00:26:37.577 filename=/dev/nvme0n1 00:26:37.577 [job1] 00:26:37.577 filename=/dev/nvme10n1 00:26:37.577 [job2] 00:26:37.577 filename=/dev/nvme1n1 00:26:37.577 [job3] 00:26:37.577 filename=/dev/nvme2n1 00:26:37.577 [job4] 00:26:37.577 filename=/dev/nvme3n1 00:26:37.577 [job5] 00:26:37.577 filename=/dev/nvme4n1 00:26:37.577 [job6] 00:26:37.577 filename=/dev/nvme5n1 00:26:37.577 [job7] 00:26:37.577 filename=/dev/nvme6n1 00:26:37.577 [job8] 00:26:37.577 filename=/dev/nvme7n1 00:26:37.577 [job9] 00:26:37.577 filename=/dev/nvme8n1 00:26:37.577 [job10] 00:26:37.577 filename=/dev/nvme9n1 00:26:37.834 Could not set queue depth (nvme0n1) 00:26:37.834 Could not set queue depth (nvme10n1) 00:26:37.834 Could not set queue depth (nvme1n1) 00:26:37.834 Could not set queue depth (nvme2n1) 00:26:37.834 Could not set queue depth (nvme3n1) 00:26:37.834 Could not set queue depth (nvme4n1) 00:26:37.834 Could not set queue depth (nvme5n1) 00:26:37.834 Could not set queue depth (nvme6n1) 00:26:37.834 Could not set queue depth (nvme7n1) 00:26:37.834 Could not set queue depth (nvme8n1) 00:26:37.834 Could not set queue depth (nvme9n1) 00:26:37.834 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:37.834 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:37.834 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:37.834 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:37.834 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:37.834 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:37.834 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:37.834 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:37.834 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:37.834 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:37.834 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:37.834 fio-3.35 00:26:37.834 Starting 11 threads 00:26:50.117 00:26:50.117 job0: (groupid=0, jobs=1): err= 0: pid=3530055: Sun Nov 10 00:00:14 2024 00:26:50.117 read: IOPS=431, BW=108MiB/s (113MB/s)(1087MiB/10066msec) 00:26:50.117 slat (usec): min=10, max=339215, avg=1906.94, stdev=11413.63 00:26:50.117 clat (msec): min=24, max=807, avg=146.19, stdev=144.26 00:26:50.117 lat (msec): min=24, max=935, avg=148.10, stdev=146.24 00:26:50.117 clat percentiles (msec): 00:26:50.117 | 1.00th=[ 29], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 51], 00:26:50.117 | 30.00th=[ 54], 40.00th=[ 83], 50.00th=[ 115], 60.00th=[ 131], 00:26:50.117 | 70.00th=[ 142], 80.00th=[ 174], 90.00th=[ 313], 95.00th=[ 531], 00:26:50.117 | 99.00th=[ 735], 99.50th=[ 751], 99.90th=[ 760], 99.95th=[ 760], 00:26:50.117 | 99.99th=[ 810] 00:26:50.117 bw ( KiB/s): min=18432, max=317440, per=15.35%, avg=109631.25, stdev=81317.11, samples=20 00:26:50.117 iops : min= 72, max= 1240, avg=428.20, stdev=317.63, samples=20 00:26:50.117 lat (msec) : 50=19.65%, 100=25.82%, 250=41.97%, 500=7.52%, 750=4.67% 00:26:50.117 lat (msec) : 1000=0.37% 00:26:50.117 cpu : usr=0.28%, sys=1.64%, ctx=716, majf=0, minf=4097 00:26:50.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:50.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.117 issued rwts: total=4346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.117 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.117 job1: (groupid=0, jobs=1): err= 0: pid=3530056: Sun Nov 10 00:00:14 2024 00:26:50.117 read: IOPS=221, BW=55.4MiB/s (58.1MB/s)(560MiB/10107msec) 00:26:50.117 slat (usec): min=12, max=306417, avg=4147.73, stdev=17012.49 00:26:50.117 clat (msec): min=5, max=1020, avg=284.25, stdev=169.19 00:26:50.117 lat (msec): min=5, max=1020, avg=288.40, stdev=171.50 00:26:50.117 clat percentiles (msec): 00:26:50.117 | 1.00th=[ 96], 5.00th=[ 122], 10.00th=[ 129], 20.00th=[ 140], 00:26:50.117 | 30.00th=[ 167], 40.00th=[ 207], 50.00th=[ 262], 60.00th=[ 296], 00:26:50.117 | 70.00th=[ 326], 80.00th=[ 376], 90.00th=[ 451], 95.00th=[ 667], 00:26:50.117 | 99.00th=[ 927], 99.50th=[ 927], 99.90th=[ 1020], 99.95th=[ 1020], 00:26:50.117 | 99.99th=[ 1020] 00:26:50.117 bw ( KiB/s): min=11776, max=129536, 
per=7.80%, avg=55725.15, stdev=31374.65, samples=20 00:26:50.117 iops : min= 46, max= 506, avg=217.65, stdev=122.55, samples=20 00:26:50.117 lat (msec) : 10=0.04%, 50=0.18%, 100=1.25%, 250=45.43%, 500=44.76% 00:26:50.117 lat (msec) : 750=5.04%, 1000=2.99%, 2000=0.31% 00:26:50.117 cpu : usr=0.13%, sys=0.83%, ctx=319, majf=0, minf=4097 00:26:50.117 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:50.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.117 issued rwts: total=2241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.117 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.117 job2: (groupid=0, jobs=1): err= 0: pid=3530057: Sun Nov 10 00:00:14 2024 00:26:50.117 read: IOPS=234, BW=58.5MiB/s (61.3MB/s)(592MiB/10111msec) 00:26:50.117 slat (usec): min=9, max=196447, avg=2277.26, stdev=11732.76 00:26:50.117 clat (msec): min=2, max=929, avg=271.03, stdev=173.51 00:26:50.117 lat (msec): min=2, max=929, avg=273.31, stdev=173.65 00:26:50.117 clat percentiles (msec): 00:26:50.117 | 1.00th=[ 39], 5.00th=[ 85], 10.00th=[ 97], 20.00th=[ 127], 00:26:50.117 | 30.00th=[ 161], 40.00th=[ 207], 50.00th=[ 247], 60.00th=[ 284], 00:26:50.117 | 70.00th=[ 317], 80.00th=[ 347], 90.00th=[ 485], 95.00th=[ 684], 00:26:50.117 | 99.00th=[ 844], 99.50th=[ 869], 99.90th=[ 911], 99.95th=[ 919], 00:26:50.117 | 99.99th=[ 927] 00:26:50.117 bw ( KiB/s): min=20992, max=129024, per=8.25%, avg=58924.15, stdev=28899.14, samples=20 00:26:50.117 iops : min= 82, max= 504, avg=230.15, stdev=112.88, samples=20 00:26:50.117 lat (msec) : 4=0.21%, 10=0.13%, 50=2.70%, 100=9.17%, 250=38.12% 00:26:50.117 lat (msec) : 500=40.49%, 750=5.54%, 1000=3.63% 00:26:50.117 cpu : usr=0.12%, sys=0.83%, ctx=454, majf=0, minf=4098 00:26:50.117 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:50.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.117 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.117 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.117 job3: (groupid=0, jobs=1): err= 0: pid=3530058: Sun Nov 10 00:00:14 2024 00:26:50.117 read: IOPS=220, BW=55.2MiB/s (57.8MB/s)(561MiB/10176msec) 00:26:50.117 slat (usec): min=10, max=586225, avg=3615.05, stdev=27595.60 00:26:50.118 clat (msec): min=2, max=1428, avg=286.28, stdev=283.51 00:26:50.118 lat (msec): min=2, max=1428, avg=289.90, stdev=287.47 00:26:50.118 clat percentiles (msec): 00:26:50.118 | 1.00th=[ 11], 5.00th=[ 54], 10.00th=[ 60], 20.00th=[ 97], 00:26:50.118 | 30.00th=[ 142], 40.00th=[ 165], 50.00th=[ 178], 60.00th=[ 192], 00:26:50.118 | 70.00th=[ 213], 80.00th=[ 456], 90.00th=[ 768], 95.00th=[ 969], 00:26:50.118 | 99.00th=[ 1217], 99.50th=[ 1250], 99.90th=[ 1351], 99.95th=[ 1351], 00:26:50.118 | 99.99th=[ 1435] 00:26:50.118 bw ( KiB/s): min= 5120, max=184320, per=7.82%, avg=55830.35, stdev=46321.10, samples=20 00:26:50.118 iops : min= 20, max= 720, avg=218.05, stdev=180.96, samples=20 00:26:50.118 lat (msec) : 4=0.09%, 10=0.94%, 50=2.41%, 100=17.42%, 250=52.74% 00:26:50.118 lat (msec) : 500=9.67%, 750=6.50%, 1000=6.06%, 2000=4.19% 00:26:50.118 cpu : usr=0.15%, sys=0.77%, ctx=341, majf=0, minf=4097 00:26:50.118 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:50.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.118 issued rwts: total=2245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.118 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.118 job4: (groupid=0, jobs=1): err= 0: pid=3530059: Sun Nov 10 00:00:14 2024 00:26:50.118 read: IOPS=562, BW=141MiB/s (147MB/s)(1421MiB/10102msec) 00:26:50.118 slat (usec): min=13, max=175595, avg=1755.94, stdev=7314.63 00:26:50.118 clat (msec): min=27, max=647, avg=111.91, stdev=96.38 00:26:50.118 lat (msec): min=27, max=647, avg=113.67, stdev=97.87 00:26:50.118 clat percentiles (msec): 00:26:50.118 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 55], 00:26:50.118 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 65], 00:26:50.118 | 70.00th=[ 133], 80.00th=[ 180], 90.00th=[ 218], 95.00th=[ 321], 00:26:50.118 | 99.00th=[ 506], 99.50th=[ 550], 99.90th=[ 592], 99.95th=[ 609], 00:26:50.118 | 99.99th=[ 651] 00:26:50.118 bw ( KiB/s): min=31232, max=289213, per=20.14%, avg=143843.05, stdev=96840.09, samples=20 00:26:50.118 iops : min= 122, max= 1129, avg=561.85, stdev=378.22, samples=20 00:26:50.118 lat (msec) : 50=9.33%, 100=58.54%, 250=23.51%, 500=7.62%, 750=1.00% 00:26:50.118 cpu : usr=0.35%, sys=1.86%, ctx=644, majf=0, minf=4097 00:26:50.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:50.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.118 issued rwts: total=5683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.118 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.118 job5: (groupid=0, jobs=1): err= 0: pid=3530060: Sun Nov 10 00:00:14 2024 00:26:50.118 read: IOPS=117, BW=29.4MiB/s (30.8MB/s)(299MiB/10170msec) 00:26:50.118 slat (usec): min=9, max=306022, avg=7246.05, stdev=28290.88 00:26:50.118 clat (msec): min=111, max=1202, avg=536.99, stdev=207.72 00:26:50.118 lat (msec): min=111, max=1202, avg=544.23, stdev=211.56 00:26:50.118 clat percentiles (msec): 00:26:50.118 | 1.00th=[ 150], 5.00th=[ 215], 10.00th=[ 284], 20.00th=[ 330], 00:26:50.118 | 30.00th=[ 393], 40.00th=[ 456], 50.00th=[ 542], 60.00th=[ 617], 00:26:50.118 | 70.00th=[ 659], 80.00th=[ 709], 90.00th=[ 818], 95.00th=[ 885], 00:26:50.118 | 99.00th=[ 1053], 99.50th=[ 1116], 99.90th=[ 1200], 99.95th=[ 1200], 00:26:50.118 | 99.99th=[ 1200] 00:26:50.118 bw ( KiB/s): min=14848, max=56832, per=4.05%, avg=28951.35, stdev=11793.66, samples=20 00:26:50.118 iops : min= 58, max= 222, avg=113.05, stdev=46.09, samples=20 00:26:50.118 lat (msec) : 250=5.61%, 500=41.00%, 750=37.15%, 1000=14.73%, 2000=1.51% 00:26:50.118 cpu : usr=0.04%, sys=0.46%, ctx=147, majf=0, minf=4097 00:26:50.118 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:26:50.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.118 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.118 issued rwts: total=1195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.118 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.118 job6: (groupid=0, jobs=1): err= 0: pid=3530067: Sun Nov 10 00:00:14 2024 00:26:50.118 read: IOPS=218, BW=54.5MiB/s (57.2MB/s)(551MiB/10104msec) 00:26:50.118 slat (usec): min=11, max=384371, avg=3903.31, stdev=17746.93 00:26:50.118 clat (msec): min=38, max=1230, avg=289.41, stdev=192.46 00:26:50.118 lat (msec): min=38, 
max=1230, avg=293.31, stdev=195.64 00:26:50.118 clat percentiles (msec): 00:26:50.118 | 1.00th=[ 53], 5.00th=[ 79], 10.00th=[ 114], 20.00th=[ 134], 00:26:50.118 | 30.00th=[ 153], 40.00th=[ 207], 50.00th=[ 259], 60.00th=[ 309], 00:26:50.118 | 70.00th=[ 351], 80.00th=[ 401], 90.00th=[ 481], 95.00th=[ 617], 00:26:50.118 | 99.00th=[ 1070], 99.50th=[ 1133], 99.90th=[ 1183], 99.95th=[ 1183], 00:26:50.118 | 99.99th=[ 1234] 00:26:50.118 bw ( KiB/s): min=13824, max=129536, per=7.67%, avg=54753.00, stdev=31935.92, samples=20 00:26:50.118 iops : min= 54, max= 506, avg=213.85, stdev=124.75, samples=20 00:26:50.118 lat (msec) : 50=0.86%, 100=5.99%, 250=41.81%, 500=43.53%, 750=4.22% 00:26:50.118 lat (msec) : 1000=1.91%, 2000=1.68% 00:26:50.118 cpu : usr=0.12%, sys=0.82%, ctx=337, majf=0, minf=4097 00:26:50.118 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:50.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.118 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.118 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.118 job7: (groupid=0, jobs=1): err= 0: pid=3530075: Sun Nov 10 00:00:14 2024 00:26:50.118 read: IOPS=127, BW=32.0MiB/s (33.5MB/s)(325MiB/10172msec) 00:26:50.118 slat (usec): min=10, max=404580, avg=6746.25, stdev=29436.91 00:26:50.118 clat (msec): min=37, max=1221, avg=493.13, stdev=242.53 00:26:50.118 lat (msec): min=37, max=1221, avg=499.87, stdev=245.08 00:26:50.118 clat percentiles (msec): 00:26:50.118 | 1.00th=[ 41], 5.00th=[ 134], 10.00th=[ 205], 20.00th=[ 284], 00:26:50.118 | 30.00th=[ 321], 40.00th=[ 372], 50.00th=[ 502], 60.00th=[ 567], 00:26:50.118 | 70.00th=[ 634], 80.00th=[ 709], 90.00th=[ 835], 95.00th=[ 927], 00:26:50.118 | 99.00th=[ 1053], 99.50th=[ 1053], 99.90th=[ 1217], 99.95th=[ 1217], 00:26:50.118 | 99.99th=[ 1217] 00:26:50.118 bw ( KiB/s): min= 7168, max=75264, per=4.43%, avg=31665.00, stdev=16295.25, samples=20 00:26:50.118 iops : min= 28, max= 294, avg=123.65, stdev=63.68, samples=20 00:26:50.118 lat (msec) : 50=1.46%, 100=1.23%, 250=12.84%, 500=34.20%, 750=34.82% 00:26:50.118 lat (msec) : 1000=13.30%, 2000=2.15% 00:26:50.118 cpu : usr=0.05%, sys=0.55%, ctx=196, majf=0, minf=4097 00:26:50.118 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.2% 00:26:50.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.118 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.118 issued rwts: total=1301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.118 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.118 job8: (groupid=0, jobs=1): err= 0: pid=3530102: Sun Nov 10 00:00:14 2024 00:26:50.118 read: IOPS=150, BW=37.7MiB/s (39.5MB/s)(383MiB/10174msec) 00:26:50.118 slat (usec): min=9, max=372974, avg=5162.78, stdev=24991.94 00:26:50.118 clat (msec): min=18, max=1068, avg=419.27, stdev=233.39 00:26:50.118 lat (msec): min=19, max=1149, avg=424.43, stdev=237.98 00:26:50.118 clat percentiles (msec): 00:26:50.118 | 1.00th=[ 34], 5.00th=[ 84], 10.00th=[ 140], 20.00th=[ 245], 00:26:50.118 | 30.00th=[ 292], 40.00th=[ 317], 50.00th=[ 347], 60.00th=[ 393], 00:26:50.118 | 70.00th=[ 558], 80.00th=[ 634], 90.00th=[ 776], 95.00th=[ 835], 00:26:50.118 | 99.00th=[ 978], 99.50th=[ 1020], 99.90th=[ 1070], 99.95th=[ 1070], 00:26:50.118 | 99.99th=[ 1070] 00:26:50.118 bw ( KiB/s): min=11776, max=107008, per=5.27%, avg=37604.50, 
stdev=22787.52, samples=20 00:26:50.118 iops : min= 46, max= 418, avg=146.85, stdev=89.05, samples=20 00:26:50.118 lat (msec) : 20=0.26%, 50=1.63%, 100=5.81%, 250=13.76%, 500=45.01% 00:26:50.118 lat (msec) : 750=22.57%, 1000=10.24%, 2000=0.72% 00:26:50.118 cpu : usr=0.10%, sys=0.53%, ctx=222, majf=0, minf=3721 00:26:50.118 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:26:50.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.118 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.118 issued rwts: total=1533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.118 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.118 job9: (groupid=0, jobs=1): err= 0: pid=3530126: Sun Nov 10 00:00:14 2024 00:26:50.118 read: IOPS=406, BW=102MiB/s (107MB/s)(1027MiB/10100msec) 00:26:50.118 slat (usec): min=10, max=325063, avg=1372.18, stdev=9541.91 00:26:50.118 clat (usec): min=1055, max=1182.6k, avg=155823.99, stdev=188196.85 00:26:50.118 lat (usec): min=1086, max=1182.7k, avg=157196.17, stdev=188794.53 00:26:50.118 clat percentiles (usec): 00:26:50.118 | 1.00th=[ 1500], 5.00th=[ 4047], 10.00th=[ 39584], 00:26:50.118 | 20.00th=[ 66323], 30.00th=[ 71828], 40.00th=[ 90702], 00:26:50.118 | 50.00th=[ 101188], 60.00th=[ 111674], 70.00th=[ 125305], 00:26:50.118 | 80.00th=[ 152044], 90.00th=[ 396362], 95.00th=[ 692061], 00:26:50.118 | 99.00th=[ 843056], 99.50th=[ 918553], 99.90th=[1098908], 00:26:50.118 | 99.95th=[1098908], 99.99th=[1182794] 00:26:50.118 bw ( KiB/s): min=12288, max=222208, per=14.50%, avg=103556.15, stdev=70397.11, samples=20 00:26:50.118 iops : min= 48, max= 868, avg=404.50, stdev=274.97, samples=20 00:26:50.118 lat (msec) : 2=4.02%, 4=0.95%, 10=0.39%, 20=0.34%, 50=9.05% 00:26:50.118 lat (msec) : 100=33.61%, 250=41.08%, 500=1.95%, 750=4.75%, 1000=3.38% 00:26:50.118 lat (msec) : 2000=0.49% 00:26:50.118 cpu : usr=0.37%, sys=1.52%, ctx=1107, majf=0, minf=4098 00:26:50.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:50.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.119 issued rwts: total=4109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.119 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.119 job10: (groupid=0, jobs=1): err= 0: pid=3530144: Sun Nov 10 00:00:14 2024 00:26:50.119 read: IOPS=114, BW=28.6MiB/s (30.0MB/s)(291MiB/10169msec) 00:26:50.119 slat (usec): min=9, max=512746, avg=5503.70, stdev=30751.02 00:26:50.119 clat (msec): min=52, max=1133, avg=554.12, stdev=230.56 00:26:50.119 lat (msec): min=52, max=1133, avg=559.63, stdev=234.38 00:26:50.119 clat percentiles (msec): 00:26:50.119 | 1.00th=[ 67], 5.00th=[ 150], 10.00th=[ 194], 20.00th=[ 380], 00:26:50.119 | 30.00th=[ 439], 40.00th=[ 510], 50.00th=[ 558], 60.00th=[ 634], 00:26:50.119 | 70.00th=[ 701], 80.00th=[ 776], 90.00th=[ 827], 95.00th=[ 885], 00:26:50.119 | 99.00th=[ 1020], 99.50th=[ 1053], 99.90th=[ 1083], 99.95th=[ 1133], 00:26:50.119 | 99.99th=[ 1133] 00:26:50.119 bw ( KiB/s): min= 2048, max=47616, per=3.94%, avg=28107.10, stdev=12113.12, samples=20 00:26:50.119 iops : min= 8, max= 186, avg=109.75, stdev=47.36, samples=20 00:26:50.119 lat (msec) : 100=3.36%, 250=11.53%, 500=22.38%, 750=38.64%, 1000=22.38% 00:26:50.119 lat (msec) : 2000=1.72% 00:26:50.119 cpu : usr=0.02%, sys=0.38%, ctx=184, majf=0, minf=4097 00:26:50.119 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 
8=0.7%, 16=1.4%, 32=2.8%, >=64=94.6% 00:26:50.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.119 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.119 issued rwts: total=1162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.119 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.119 00:26:50.119 Run status group 0 (all jobs): 00:26:50.119 READ: bw=697MiB/s (731MB/s), 28.6MiB/s-141MiB/s (30.0MB/s-147MB/s), io=7096MiB (7441MB), run=10066-10176msec 00:26:50.119 00:26:50.119 Disk stats (read/write): 00:26:50.119 nvme0n1: ios=8500/0, merge=0/0, ticks=1238002/0, in_queue=1238002, util=97.14% 00:26:50.119 nvme10n1: ios=4322/0, merge=0/0, ticks=1238204/0, in_queue=1238204, util=97.37% 00:26:50.119 nvme1n1: ios=4547/0, merge=0/0, ticks=1242675/0, in_queue=1242675, util=97.63% 00:26:50.119 nvme2n1: ios=4362/0, merge=0/0, ticks=1193522/0, in_queue=1193522, util=97.77% 00:26:50.119 nvme3n1: ios=11177/0, merge=0/0, ticks=1232217/0, in_queue=1232217, util=97.83% 00:26:50.119 nvme4n1: ios=2263/0, merge=0/0, ticks=1183129/0, in_queue=1183129, util=98.16% 00:26:50.119 nvme5n1: ios=4196/0, merge=0/0, ticks=1239443/0, in_queue=1239443, util=98.34% 00:26:50.119 nvme6n1: ios=2474/0, merge=0/0, ticks=1173287/0, in_queue=1173287, util=98.45% 00:26:50.119 nvme7n1: ios=2939/0, merge=0/0, ticks=1195977/0, in_queue=1195977, util=98.91% 00:26:50.119 nvme8n1: ios=8025/0, merge=0/0, ticks=1202373/0, in_queue=1202373, util=99.11% 00:26:50.119 nvme9n1: ios=2196/0, merge=0/0, ticks=1204159/0, in_queue=1204159, util=99.25% 00:26:50.119 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:50.119 [global] 00:26:50.119 thread=1 00:26:50.119 invalidate=1 00:26:50.119 rw=randwrite 00:26:50.119 time_based=1 00:26:50.119 runtime=10 00:26:50.119 ioengine=libaio 00:26:50.119 direct=1 00:26:50.119 bs=262144 00:26:50.119 iodepth=64 00:26:50.119 norandommap=1 00:26:50.119 numjobs=1 00:26:50.119 00:26:50.119 [job0] 00:26:50.119 filename=/dev/nvme0n1 00:26:50.119 [job1] 00:26:50.119 filename=/dev/nvme10n1 00:26:50.119 [job2] 00:26:50.119 filename=/dev/nvme1n1 00:26:50.119 [job3] 00:26:50.119 filename=/dev/nvme2n1 00:26:50.119 [job4] 00:26:50.119 filename=/dev/nvme3n1 00:26:50.119 [job5] 00:26:50.119 filename=/dev/nvme4n1 00:26:50.119 [job6] 00:26:50.119 filename=/dev/nvme5n1 00:26:50.119 [job7] 00:26:50.119 filename=/dev/nvme6n1 00:26:50.119 [job8] 00:26:50.119 filename=/dev/nvme7n1 00:26:50.119 [job9] 00:26:50.119 filename=/dev/nvme8n1 00:26:50.119 [job10] 00:26:50.119 filename=/dev/nvme9n1 00:26:50.119 Could not set queue depth (nvme0n1) 00:26:50.119 Could not set queue depth (nvme10n1) 00:26:50.119 Could not set queue depth (nvme1n1) 00:26:50.119 Could not set queue depth (nvme2n1) 00:26:50.119 Could not set queue depth (nvme3n1) 00:26:50.119 Could not set queue depth (nvme4n1) 00:26:50.119 Could not set queue depth (nvme5n1) 00:26:50.119 Could not set queue depth (nvme6n1) 00:26:50.119 Could not set queue depth (nvme7n1) 00:26:50.119 Could not set queue depth (nvme8n1) 00:26:50.119 Could not set queue depth (nvme9n1) 00:26:50.119 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.119 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.119 job2: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.119 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.119 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.119 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.119 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.119 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.119 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.119 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.119 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:50.119 fio-3.35 00:26:50.119 Starting 11 threads 00:27:00.091 00:27:00.091 job0: (groupid=0, jobs=1): err= 0: pid=3531448: Sun Nov 10 00:00:25 2024 00:27:00.091 write: IOPS=282, BW=70.6MiB/s (74.1MB/s)(719MiB/10172msec); 0 zone resets 00:27:00.091 slat (usec): min=22, max=143818, avg=2821.38, stdev=8917.76 00:27:00.091 clat (usec): min=1277, max=661907, avg=223463.80, stdev=145829.85 00:27:00.091 lat (usec): min=1317, max=661950, avg=226285.18, stdev=147949.67 00:27:00.091 clat percentiles (msec): 00:27:00.091 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 37], 20.00th=[ 86], 00:27:00.091 | 30.00th=[ 148], 40.00th=[ 184], 50.00th=[ 218], 60.00th=[ 243], 00:27:00.091 | 70.00th=[ 259], 80.00th=[ 351], 90.00th=[ 414], 95.00th=[ 535], 00:27:00.091 | 99.00th=[ 625], 99.50th=[ 651], 99.90th=[ 659], 99.95th=[ 659], 00:27:00.091 | 99.99th=[ 659] 00:27:00.091 bw ( KiB/s): min=22528, max=163328, per=9.36%, avg=71923.35, stdev=37003.59, samples=20 00:27:00.091 iops : min= 88, max= 638, avg=280.90, stdev=144.56, samples=20 00:27:00.091 lat (msec) : 2=0.21%, 4=0.84%, 10=2.99%, 20=1.91%, 50=7.31% 00:27:00.091 lat (msec) : 100=8.94%, 250=44.26%, 500=27.91%, 750=5.64% 00:27:00.091 cpu : usr=0.94%, sys=0.88%, ctx=1469, majf=0, minf=1 00:27:00.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:27:00.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.091 issued rwts: total=0,2874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.091 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.091 job1: (groupid=0, jobs=1): err= 0: pid=3531460: Sun Nov 10 00:00:25 2024 00:27:00.091 write: IOPS=296, BW=74.2MiB/s (77.9MB/s)(753MiB/10142msec); 0 zone resets 00:27:00.091 slat (usec): min=19, max=139189, avg=1558.63, stdev=6211.88 00:27:00.091 clat (usec): min=1245, max=802933, avg=213749.29, stdev=167765.72 00:27:00.091 lat (usec): min=1278, max=820150, avg=215307.92, stdev=168791.84 00:27:00.091 clat percentiles (msec): 00:27:00.091 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 34], 20.00th=[ 74], 00:27:00.091 | 30.00th=[ 117], 40.00th=[ 127], 50.00th=[ 161], 60.00th=[ 211], 00:27:00.091 | 70.00th=[ 262], 80.00th=[ 359], 90.00th=[ 481], 95.00th=[ 542], 00:27:00.091 | 99.00th=[ 718], 99.50th=[ 751], 99.90th=[ 785], 99.95th=[ 793], 00:27:00.091 | 99.99th=[ 802] 
00:27:00.091 bw ( KiB/s): min=28672, max=117248, per=9.82%, avg=75482.30, stdev=26829.09, samples=20 00:27:00.091 iops : min= 112, max= 458, avg=294.80, stdev=104.83, samples=20 00:27:00.091 lat (msec) : 2=0.73%, 4=1.36%, 10=3.85%, 20=2.46%, 50=5.08% 00:27:00.091 lat (msec) : 100=12.32%, 250=42.30%, 500=23.77%, 750=7.67%, 1000=0.46% 00:27:00.091 cpu : usr=0.91%, sys=0.97%, ctx=2047, majf=0, minf=1 00:27:00.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:27:00.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.091 issued rwts: total=0,3012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.091 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.091 job2: (groupid=0, jobs=1): err= 0: pid=3531461: Sun Nov 10 00:00:25 2024 00:27:00.091 write: IOPS=227, BW=56.8MiB/s (59.6MB/s)(578MiB/10170msec); 0 zone resets 00:27:00.091 slat (usec): min=18, max=104884, avg=3113.22, stdev=8904.78 00:27:00.091 clat (usec): min=1326, max=747659, avg=278287.46, stdev=182386.78 00:27:00.091 lat (usec): min=1384, max=747692, avg=281400.69, stdev=184753.36 00:27:00.091 clat percentiles (msec): 00:27:00.091 | 1.00th=[ 5], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 87], 00:27:00.091 | 30.00th=[ 165], 40.00th=[ 224], 50.00th=[ 257], 60.00th=[ 313], 00:27:00.091 | 70.00th=[ 376], 80.00th=[ 443], 90.00th=[ 542], 95.00th=[ 600], 00:27:00.091 | 99.00th=[ 718], 99.50th=[ 735], 99.90th=[ 751], 99.95th=[ 751], 00:27:00.091 | 99.99th=[ 751] 00:27:00.091 bw ( KiB/s): min=22528, max=162816, per=7.49%, avg=57566.90, stdev=34204.79, samples=20 00:27:00.091 iops : min= 88, max= 636, avg=224.80, stdev=133.62, samples=20 00:27:00.091 lat (msec) : 2=0.22%, 4=0.65%, 10=2.68%, 20=0.56%, 50=9.69% 00:27:00.091 lat (msec) : 100=8.30%, 250=26.43%, 500=37.28%, 750=14.19% 00:27:00.091 cpu : usr=0.72%, sys=0.72%, ctx=1266, majf=0, minf=2 00:27:00.091 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:27:00.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.092 issued rwts: total=0,2312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.092 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.092 job3: (groupid=0, jobs=1): err= 0: pid=3531462: Sun Nov 10 00:00:25 2024 00:27:00.092 write: IOPS=201, BW=50.5MiB/s (52.9MB/s)(515MiB/10201msec); 0 zone resets 00:27:00.092 slat (usec): min=17, max=323810, avg=3549.66, stdev=11943.71 00:27:00.092 clat (usec): min=1331, max=832920, avg=313219.38, stdev=194893.33 00:27:00.092 lat (usec): min=1356, max=832963, avg=316769.04, stdev=197077.58 00:27:00.092 clat percentiles (msec): 00:27:00.092 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 27], 20.00th=[ 72], 00:27:00.092 | 30.00th=[ 174], 40.00th=[ 300], 50.00th=[ 351], 60.00th=[ 397], 00:27:00.092 | 70.00th=[ 451], 80.00th=[ 485], 90.00th=[ 535], 95.00th=[ 575], 00:27:00.092 | 99.00th=[ 751], 99.50th=[ 793], 99.90th=[ 827], 99.95th=[ 827], 00:27:00.092 | 99.99th=[ 835] 00:27:00.092 bw ( KiB/s): min=28672, max=188550, per=6.65%, avg=51104.70, stdev=34619.37, samples=20 00:27:00.092 iops : min= 112, max= 736, avg=199.55, stdev=135.13, samples=20 00:27:00.092 lat (msec) : 2=0.58%, 4=0.97%, 10=3.40%, 20=3.01%, 50=8.69% 00:27:00.092 lat (msec) : 100=5.87%, 250=12.52%, 500=48.79%, 750=15.19%, 1000=0.97% 00:27:00.092 cpu : usr=0.44%, sys=0.73%, ctx=1220, majf=0, 
minf=1 00:27:00.092 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:27:00.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.092 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.092 issued rwts: total=0,2060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.092 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.092 job4: (groupid=0, jobs=1): err= 0: pid=3531463: Sun Nov 10 00:00:25 2024 00:27:00.092 write: IOPS=214, BW=53.7MiB/s (56.3MB/s)(548MiB/10216msec); 0 zone resets 00:27:00.092 slat (usec): min=17, max=101674, avg=3298.44, stdev=9129.65 00:27:00.092 clat (usec): min=959, max=813480, avg=294561.23, stdev=180230.52 00:27:00.092 lat (usec): min=992, max=813524, avg=297859.67, stdev=181778.60 00:27:00.092 clat percentiles (usec): 00:27:00.092 | 1.00th=[ 1778], 5.00th=[ 9896], 10.00th=[ 94897], 20.00th=[122160], 00:27:00.092 | 30.00th=[170918], 40.00th=[217056], 50.00th=[291505], 60.00th=[337642], 00:27:00.092 | 70.00th=[375391], 80.00th=[450888], 90.00th=[534774], 95.00th=[624952], 00:27:00.092 | 99.00th=[767558], 99.50th=[792724], 99.90th=[809501], 99.95th=[809501], 00:27:00.092 | 99.99th=[809501] 00:27:00.092 bw ( KiB/s): min=24064, max=118784, per=7.09%, avg=54496.40, stdev=27363.39, samples=20 00:27:00.092 iops : min= 94, max= 464, avg=212.80, stdev=106.90, samples=20 00:27:00.092 lat (usec) : 1000=0.14% 00:27:00.092 lat (msec) : 2=1.19%, 4=0.82%, 10=2.92%, 20=1.41%, 50=0.59% 00:27:00.092 lat (msec) : 100=6.29%, 250=30.96%, 500=42.09%, 750=12.27%, 1000=1.32% 00:27:00.092 cpu : usr=0.63%, sys=0.79%, ctx=1118, majf=0, minf=1 00:27:00.092 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:27:00.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.092 issued rwts: total=0,2193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.092 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.092 job5: (groupid=0, jobs=1): err= 0: pid=3531464: Sun Nov 10 00:00:25 2024 00:27:00.092 write: IOPS=292, BW=73.0MiB/s (76.6MB/s)(746MiB/10216msec); 0 zone resets 00:27:00.092 slat (usec): min=17, max=221669, avg=2398.23, stdev=8488.83 00:27:00.092 clat (msec): min=2, max=792, avg=215.90, stdev=177.95 00:27:00.092 lat (msec): min=2, max=792, avg=218.30, stdev=180.01 00:27:00.092 clat percentiles (msec): 00:27:00.092 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 34], 20.00th=[ 57], 00:27:00.092 | 30.00th=[ 78], 40.00th=[ 101], 50.00th=[ 159], 60.00th=[ 228], 00:27:00.092 | 70.00th=[ 317], 80.00th=[ 388], 90.00th=[ 472], 95.00th=[ 527], 00:27:00.092 | 99.00th=[ 718], 99.50th=[ 751], 99.90th=[ 793], 99.95th=[ 793], 00:27:00.092 | 99.99th=[ 793] 00:27:00.092 bw ( KiB/s): min=28672, max=238592, per=9.73%, avg=74768.85, stdev=57192.77, samples=20 00:27:00.092 iops : min= 112, max= 932, avg=292.00, stdev=223.43, samples=20 00:27:00.092 lat (msec) : 4=0.13%, 10=1.98%, 20=3.22%, 50=11.42%, 100=23.18% 00:27:00.092 lat (msec) : 250=22.01%, 500=30.69%, 750=6.73%, 1000=0.64% 00:27:00.092 cpu : usr=0.85%, sys=1.07%, ctx=1822, majf=0, minf=1 00:27:00.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:27:00.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.092 issued rwts: total=0,2985,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:27:00.092 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.092 job6: (groupid=0, jobs=1): err= 0: pid=3531465: Sun Nov 10 00:00:25 2024 00:27:00.092 write: IOPS=322, BW=80.7MiB/s (84.6MB/s)(816MiB/10108msec); 0 zone resets 00:27:00.092 slat (usec): min=16, max=218310, avg=1780.89, stdev=8530.54 00:27:00.092 clat (usec): min=905, max=860292, avg=196382.05, stdev=181950.97 00:27:00.092 lat (usec): min=937, max=867632, avg=198162.95, stdev=184096.69 00:27:00.092 clat percentiles (msec): 00:27:00.092 | 1.00th=[ 3], 5.00th=[ 15], 10.00th=[ 26], 20.00th=[ 48], 00:27:00.092 | 30.00th=[ 57], 40.00th=[ 77], 50.00th=[ 110], 60.00th=[ 178], 00:27:00.092 | 70.00th=[ 300], 80.00th=[ 372], 90.00th=[ 481], 95.00th=[ 535], 00:27:00.092 | 99.00th=[ 701], 99.50th=[ 743], 99.90th=[ 835], 99.95th=[ 852], 00:27:00.092 | 99.99th=[ 860] 00:27:00.092 bw ( KiB/s): min=16384, max=251912, per=10.65%, avg=81880.70, stdev=59630.14, samples=20 00:27:00.092 iops : min= 64, max= 984, avg=319.80, stdev=232.88, samples=20 00:27:00.092 lat (usec) : 1000=0.06% 00:27:00.092 lat (msec) : 2=0.52%, 4=1.13%, 10=1.69%, 20=4.41%, 50=13.61% 00:27:00.092 lat (msec) : 100=27.06%, 250=16.37%, 500=27.46%, 750=7.23%, 1000=0.46% 00:27:00.092 cpu : usr=0.87%, sys=1.15%, ctx=2452, majf=0, minf=2 00:27:00.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:27:00.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.092 issued rwts: total=0,3263,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.092 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.092 job7: (groupid=0, jobs=1): err= 0: pid=3531466: Sun Nov 10 00:00:25 2024 00:27:00.092 write: IOPS=281, BW=70.3MiB/s (73.7MB/s)(718MiB/10209msec); 0 zone resets 00:27:00.092 slat (usec): min=21, max=80200, avg=2875.82, stdev=8091.97 00:27:00.092 clat (usec): min=1253, max=747868, avg=224479.81, stdev=177859.90 00:27:00.092 lat (usec): min=1282, max=747904, avg=227355.63, stdev=180219.13 00:27:00.092 clat percentiles (msec): 00:27:00.092 | 1.00th=[ 12], 5.00th=[ 56], 10.00th=[ 62], 20.00th=[ 75], 00:27:00.092 | 30.00th=[ 93], 40.00th=[ 120], 50.00th=[ 132], 60.00th=[ 203], 00:27:00.092 | 70.00th=[ 321], 80.00th=[ 393], 90.00th=[ 506], 95.00th=[ 575], 00:27:00.092 | 99.00th=[ 709], 99.50th=[ 735], 99.90th=[ 751], 99.95th=[ 751], 00:27:00.092 | 99.99th=[ 751] 00:27:00.092 bw ( KiB/s): min=22528, max=187528, per=9.35%, avg=71888.70, stdev=54061.33, samples=20 00:27:00.092 iops : min= 88, max= 732, avg=280.75, stdev=211.15, samples=20 00:27:00.092 lat (msec) : 2=0.17%, 4=0.28%, 10=0.49%, 20=0.52%, 50=2.19% 00:27:00.092 lat (msec) : 100=29.42%, 250=31.89%, 500=24.37%, 750=10.65% 00:27:00.092 cpu : usr=0.78%, sys=1.00%, ctx=1077, majf=0, minf=1 00:27:00.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:27:00.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.092 issued rwts: total=0,2872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.092 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.092 job8: (groupid=0, jobs=1): err= 0: pid=3531467: Sun Nov 10 00:00:25 2024 00:27:00.092 write: IOPS=257, BW=64.4MiB/s (67.5MB/s)(653MiB/10143msec); 0 zone resets 00:27:00.092 slat (usec): min=25, max=281714, avg=3393.21, stdev=9686.60 00:27:00.092 clat (msec): min=13, max=687, 
avg=244.76, stdev=144.84 00:27:00.092 lat (msec): min=13, max=687, avg=248.16, stdev=146.26 00:27:00.092 clat percentiles (msec): 00:27:00.092 | 1.00th=[ 57], 5.00th=[ 64], 10.00th=[ 118], 20.00th=[ 127], 00:27:00.092 | 30.00th=[ 153], 40.00th=[ 167], 50.00th=[ 192], 60.00th=[ 234], 00:27:00.092 | 70.00th=[ 284], 80.00th=[ 359], 90.00th=[ 489], 95.00th=[ 542], 00:27:00.092 | 99.00th=[ 651], 99.50th=[ 676], 99.90th=[ 684], 99.95th=[ 684], 00:27:00.092 | 99.99th=[ 684] 00:27:00.092 bw ( KiB/s): min=26624, max=142336, per=8.49%, avg=65260.20, stdev=34224.19, samples=20 00:27:00.092 iops : min= 104, max= 556, avg=254.85, stdev=133.71, samples=20 00:27:00.092 lat (msec) : 20=0.08%, 50=0.34%, 100=7.31%, 250=55.87%, 500=27.90% 00:27:00.092 lat (msec) : 750=8.50% 00:27:00.092 cpu : usr=0.72%, sys=0.81%, ctx=751, majf=0, minf=1 00:27:00.092 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:27:00.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.092 issued rwts: total=0,2613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.092 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.092 job9: (groupid=0, jobs=1): err= 0: pid=3531468: Sun Nov 10 00:00:25 2024 00:27:00.092 write: IOPS=367, BW=92.0MiB/s (96.4MB/s)(926MiB/10070msec); 0 zone resets 00:27:00.092 slat (usec): min=19, max=185325, avg=1539.66, stdev=6870.32 00:27:00.092 clat (usec): min=1360, max=730531, avg=172348.17, stdev=157899.22 00:27:00.092 lat (usec): min=1402, max=743426, avg=173887.82, stdev=159128.98 00:27:00.092 clat percentiles (msec): 00:27:00.092 | 1.00th=[ 10], 5.00th=[ 38], 10.00th=[ 57], 20.00th=[ 64], 00:27:00.092 | 30.00th=[ 66], 40.00th=[ 69], 50.00th=[ 81], 60.00th=[ 155], 00:27:00.092 | 70.00th=[ 209], 80.00th=[ 292], 90.00th=[ 439], 95.00th=[ 531], 00:27:00.092 | 99.00th=[ 642], 99.50th=[ 676], 99.90th=[ 718], 99.95th=[ 726], 00:27:00.092 | 99.99th=[ 735] 00:27:00.092 bw ( KiB/s): min=33792, max=240128, per=12.12%, avg=93180.40, stdev=65287.00, samples=20 00:27:00.092 iops : min= 132, max= 938, avg=363.95, stdev=255.01, samples=20 00:27:00.092 lat (msec) : 2=0.19%, 4=0.32%, 10=0.51%, 20=1.40%, 50=5.29% 00:27:00.092 lat (msec) : 100=46.30%, 250=22.22%, 500=17.98%, 750=5.78% 00:27:00.092 cpu : usr=1.21%, sys=1.13%, ctx=2128, majf=0, minf=1 00:27:00.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:00.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.092 issued rwts: total=0,3704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.093 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.093 job10: (groupid=0, jobs=1): err= 0: pid=3531469: Sun Nov 10 00:00:25 2024 00:27:00.093 write: IOPS=274, BW=68.6MiB/s (71.9MB/s)(698MiB/10169msec); 0 zone resets 00:27:00.093 slat (usec): min=23, max=103667, avg=2868.40, stdev=7315.89 00:27:00.093 clat (msec): min=10, max=633, avg=229.73, stdev=134.74 00:27:00.093 lat (msec): min=12, max=643, avg=232.60, stdev=136.53 00:27:00.093 clat percentiles (msec): 00:27:00.093 | 1.00th=[ 27], 5.00th=[ 49], 10.00th=[ 100], 20.00th=[ 131], 00:27:00.093 | 30.00th=[ 161], 40.00th=[ 174], 50.00th=[ 203], 60.00th=[ 218], 00:27:00.093 | 70.00th=[ 243], 80.00th=[ 321], 90.00th=[ 447], 95.00th=[ 535], 00:27:00.093 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 634], 99.95th=[ 634], 00:27:00.093 | 
99.99th=[ 634] 00:27:00.093 bw ( KiB/s): min=26624, max=142848, per=9.08%, avg=69816.35, stdev=31949.58, samples=20 00:27:00.093 iops : min= 104, max= 558, avg=272.65, stdev=124.83, samples=20 00:27:00.093 lat (msec) : 20=0.21%, 50=5.27%, 100=4.62%, 250=61.63%, 500=20.46% 00:27:00.093 lat (msec) : 750=7.81% 00:27:00.093 cpu : usr=0.82%, sys=1.02%, ctx=1174, majf=0, minf=1 00:27:00.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:27:00.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:00.093 issued rwts: total=0,2791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.093 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:00.093 00:27:00.093 Run status group 0 (all jobs): 00:27:00.093 WRITE: bw=751MiB/s (787MB/s), 50.5MiB/s-92.0MiB/s (52.9MB/s-96.4MB/s), io=7670MiB (8042MB), run=10070-10216msec 00:27:00.093 00:27:00.093 Disk stats (read/write): 00:27:00.093 nvme0n1: ios=49/5586, merge=0/0, ticks=3702/1190414, in_queue=1194116, util=99.96% 00:27:00.093 nvme10n1: ios=36/5854, merge=0/0, ticks=2134/1224913, in_queue=1227047, util=100.00% 00:27:00.093 nvme1n1: ios=5/4463, merge=0/0, ticks=207/1211356, in_queue=1211563, util=97.97% 00:27:00.093 nvme2n1: ios=0/4101, merge=0/0, ticks=0/1245969, in_queue=1245969, util=97.80% 00:27:00.093 nvme3n1: ios=17/4350, merge=0/0, ticks=302/1242242, in_queue=1242544, util=100.00% 00:27:00.093 nvme4n1: ios=43/5937, merge=0/0, ticks=4606/1231150, in_queue=1235756, util=100.00% 00:27:00.093 nvme5n1: ios=0/6317, merge=0/0, ticks=0/1212400, in_queue=1212400, util=98.24% 00:27:00.093 nvme6n1: ios=0/5712, merge=0/0, ticks=0/1238980, in_queue=1238980, util=98.40% 00:27:00.093 nvme7n1: ios=50/5054, merge=0/0, ticks=2175/1191906, in_queue=1194081, util=100.00% 00:27:00.093 nvme8n1: ios=37/7162, merge=0/0, ticks=962/1223274, in_queue=1224236, util=100.00% 00:27:00.093 nvme9n1: ios=45/5423, merge=0/0, ticks=2804/1198191, in_queue=1200995, util=100.00% 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:00.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK1 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK1 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:00.093 
00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:00.093 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK2 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK2 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.093 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:00.352 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:00.352 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:00.352 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:00.352 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:00.352 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK3 00:27:00.352 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:00.352 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK3 00:27:00.352 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:00.352 
00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:00.352 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.352 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.352 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.352 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.352 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:00.917 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:00.917 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:00.917 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:00.917 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:00.917 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK4 00:27:00.917 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:00.917 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK4 00:27:00.917 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:00.917 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:00.917 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.917 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.918 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.918 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.918 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:01.176 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:01.176 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:01.176 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:01.176 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:01.176 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK5 00:27:01.176 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:01.176 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK5 00:27:01.176 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:01.176 
00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:01.176 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.176 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:01.176 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.176 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:01.176 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:01.434 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:01.434 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:01.434 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:01.434 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:01.434 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK6 00:27:01.434 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:01.434 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK6 00:27:01.692 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:01.692 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:01.692 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.692 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:01.692 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.692 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:01.692 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:01.950 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:01.950 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:01.950 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:01.950 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:01.950 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK7 00:27:01.950 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:01.950 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK7 00:27:01.950 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:01.950 
00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:01.950 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.950 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:01.950 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.950 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:01.950 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:02.209 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK8 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK8 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.209 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:02.468 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:02.468 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:02.468 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:02.468 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:02.468 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK9 00:27:02.468 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:02.468 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK9 00:27:02.468 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:02.468 
00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:02.468 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.468 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.468 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.468 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.468 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:02.727 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK10 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK10 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.727 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:02.990 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:02.990 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:02.990 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:02.990 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:02.990 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK11 00:27:02.990 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:02.990 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK11 00:27:02.990 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 
00:27:02.990 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:02.991 rmmod nvme_tcp 00:27:02.991 rmmod nvme_fabrics 00:27:02.991 rmmod nvme_keyring 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3525559 ']' 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3525559 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' -z 3525559 ']' 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # kill -0 3525559 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # uname 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3525559 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3525559' 00:27:02.991 killing process with pid 3525559 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@971 -- # kill 3525559 00:27:02.991 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@976 -- # wait 3525559 00:27:06.282 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:06.282 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:06.282 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:06.282 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:27:06.282 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:27:06.282 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:06.282 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:27:06.282 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:06.282 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:06.282 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.282 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.282 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.187 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:08.187 00:27:08.187 real 1m5.654s 00:27:08.188 user 3m50.551s 00:27:08.188 sys 0m16.190s 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.188 ************************************ 00:27:08.188 END TEST nvmf_multiconnection 00:27:08.188 ************************************ 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:08.188 ************************************ 00:27:08.188 START TEST nvmf_initiator_timeout 00:27:08.188 ************************************ 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:08.188 * Looking for test storage... 
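For readability, a condensed sketch of the per-subsystem teardown loop traced above (the names NVMF_SUBSYS, waitforserial_disconnect and rpc_cmd are taken from the multiconnection.sh trace; the loop body here is a simplification, not a verbatim copy of the script):

  for i in $(seq 1 "$NVMF_SUBSYS"); do
      # host side: drop the NVMe/TCP connection to cnode$i
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # poll lsblk until the namespace with serial SPDK$i disappears
      waitforserial_disconnect "SPDK${i}"
      # target side: delete the subsystem over the SPDK RPC socket
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done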
00:27:08.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:08.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.188 --rc genhtml_branch_coverage=1 00:27:08.188 --rc genhtml_function_coverage=1 00:27:08.188 --rc genhtml_legend=1 00:27:08.188 --rc geninfo_all_blocks=1 00:27:08.188 --rc geninfo_unexecuted_blocks=1 00:27:08.188 00:27:08.188 ' 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:08.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.188 --rc genhtml_branch_coverage=1 00:27:08.188 --rc genhtml_function_coverage=1 00:27:08.188 --rc genhtml_legend=1 00:27:08.188 --rc geninfo_all_blocks=1 00:27:08.188 --rc geninfo_unexecuted_blocks=1 00:27:08.188 00:27:08.188 ' 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:08.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.188 --rc genhtml_branch_coverage=1 00:27:08.188 --rc genhtml_function_coverage=1 00:27:08.188 --rc genhtml_legend=1 00:27:08.188 --rc geninfo_all_blocks=1 00:27:08.188 --rc geninfo_unexecuted_blocks=1 00:27:08.188 00:27:08.188 ' 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:08.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.188 --rc genhtml_branch_coverage=1 00:27:08.188 --rc genhtml_function_coverage=1 00:27:08.188 --rc genhtml_legend=1 00:27:08.188 --rc geninfo_all_blocks=1 00:27:08.188 --rc geninfo_unexecuted_blocks=1 00:27:08.188 00:27:08.188 ' 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.188 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.189 00:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:08.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:08.189 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:10.090 00:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:10.090 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.090 00:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:10.090 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:10.090 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.090 00:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:10.090 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:10.090 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.091 00:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:10.091 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:10.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:27:10.349 00:27:10.349 --- 10.0.0.2 ping statistics --- 00:27:10.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.349 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:10.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:27:10.349 00:27:10.349 --- 10.0.0.1 ping statistics --- 00:27:10.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.349 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3535044 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3535044 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # '[' -z 3535044 ']' 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:10.349 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.349 [2024-11-10 00:00:36.459278] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:27:10.349 [2024-11-10 00:00:36.459413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.608 [2024-11-10 00:00:36.607639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.608 [2024-11-10 00:00:36.751431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.608 [2024-11-10 00:00:36.751512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.608 [2024-11-10 00:00:36.751537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.608 [2024-11-10 00:00:36.751560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.608 [2024-11-10 00:00:36.751596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
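The nvmf_tcp_init trace above builds a point-to-point NVMe/TCP topology from the two e810 ports by isolating the target port in its own network namespace. A minimal sketch reconstructed from that trace (device and namespace names as logged; the real common.sh also tags the iptables rule with an SPDK_NVMF comment so the later iptr teardown can strip it, which is omitted here):

  ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address (default namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                # sanity-check target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and the reverse direction
  # nvmf_tgt is then started inside the target namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF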
00:27:10.608 [2024-11-10 00:00:36.754459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.608 [2024-11-10 00:00:36.754531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.608 [2024-11-10 00:00:36.754632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.608 [2024-11-10 00:00:36.754652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@866 -- # return 0 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.541 Malloc0 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.541 Delay0 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.541 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.541 [2024-11-10 00:00:37.540211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.542 00:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.542 [2024-11-10 00:00:37.569946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.542 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:12.105 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:12.105 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # local i=0 00:27:12.105 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:12.105 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:12.105 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # sleep 2 00:27:14.004 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:14.005 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:14.005 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:27:14.005 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:14.005 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:14.005 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # return 0 00:27:14.262 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3535475 00:27:14.262 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:27:14.262 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:14.262 [global] 00:27:14.262 thread=1 00:27:14.262 invalidate=1 00:27:14.262 rw=write 00:27:14.262 time_based=1 00:27:14.262 runtime=60 00:27:14.262 ioengine=libaio 00:27:14.262 direct=1 00:27:14.262 bs=4096 00:27:14.262 iodepth=1 00:27:14.262 norandommap=0 00:27:14.262 numjobs=1 00:27:14.262 00:27:14.262 verify_dump=1 00:27:14.262 verify_backlog=512 00:27:14.262 verify_state_save=0 00:27:14.262 do_verify=1 00:27:14.262 verify=crc32c-intel 00:27:14.262 [job0] 00:27:14.262 filename=/dev/nvme0n1 00:27:14.262 Could not set queue depth (nvme0n1) 00:27:14.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:14.262 fio-3.35 00:27:14.262 Starting 1 thread 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.540 true 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.540 true 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.540 true 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.540 true 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.540 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:20.066 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:20.066 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.066 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:27:20.066 true 00:27:20.066 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.066 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:20.066 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.066 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:20.066 true 00:27:20.066 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.066 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:20.066 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.066 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:20.324 true 00:27:20.324 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.324 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:20.324 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.324 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:20.324 true 00:27:20.324 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.324 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:20.324 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3535475 00:28:16.535 00:28:16.535 job0: (groupid=0, jobs=1): err= 0: pid=3535550: Sun Nov 10 00:01:40 2024 00:28:16.535 read: IOPS=30, BW=122KiB/s (124kB/s)(7292KiB/60016msec) 00:28:16.535 slat (nsec): min=4730, max=73713, avg=22686.89, stdev=11069.95 00:28:16.535 clat (usec): min=250, max=41085k, avg=32545.19, stdev=962177.85 00:28:16.535 lat (usec): min=261, max=41085k, avg=32567.88, stdev=962178.17 00:28:16.535 clat percentiles (usec): 00:28:16.535 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 293], 00:28:16.535 | 20.00th=[ 334], 30.00th=[ 363], 40.00th=[ 388], 00:28:16.535 | 50.00th=[ 424], 60.00th=[ 498], 70.00th=[ 553], 00:28:16.535 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:28:16.535 | 99.00th=[ 41681], 99.50th=[ 41681], 99.90th=[ 44827], 00:28:16.535 | 99.95th=[17112761], 99.99th=[17112761] 00:28:16.535 write: IOPS=34, BW=136KiB/s (140kB/s)(8192KiB/60016msec); 0 zone resets 00:28:16.535 slat (usec): min=5, max=26720, avg=30.14, stdev=590.17 00:28:16.535 clat (usec): min=189, max=1000, avg=271.96, stdev=69.83 00:28:16.535 lat (usec): min=197, max=27132, avg=302.10, stdev=598.19 00:28:16.535 clat percentiles (usec): 00:28:16.535 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 223], 00:28:16.535 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 253], 00:28:16.535 | 70.00th=[ 281], 80.00th=[ 330], 90.00th=[ 383], 95.00th=[ 416], 00:28:16.535 | 99.00th=[ 469], 99.50th=[ 494], 99.90th=[ 562], 99.95th=[ 644], 00:28:16.535 | 
99.99th=[ 1004] 00:28:16.535 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=4 00:28:16.535 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=4 00:28:16.535 lat (usec) : 250=30.22%, 500=51.02%, 750=7.49%, 1000=0.05% 00:28:16.535 lat (msec) : 2=0.05%, 50=11.13%, >=2000=0.03% 00:28:16.535 cpu : usr=0.07%, sys=0.16%, ctx=3875, majf=0, minf=1 00:28:16.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.535 issued rwts: total=1823,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:16.535 00:28:16.535 Run status group 0 (all jobs): 00:28:16.535 READ: bw=122KiB/s (124kB/s), 122KiB/s-122KiB/s (124kB/s-124kB/s), io=7292KiB (7467kB), run=60016-60016msec 00:28:16.535 WRITE: bw=136KiB/s (140kB/s), 136KiB/s-136KiB/s (140kB/s-140kB/s), io=8192KiB (8389kB), run=60016-60016msec 00:28:16.535 00:28:16.535 Disk stats (read/write): 00:28:16.535 nvme0n1: ios=1871/2048, merge=0/0, ticks=19379/521, in_queue=19900, util=99.76% 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:16.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1221 -- # local i=0 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1233 -- # return 0 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:16.535 nvmf hotplug test: fio successful as expected 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # 
trap - SIGINT SIGTERM EXIT 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.535 rmmod nvme_tcp 00:28:16.535 rmmod nvme_fabrics 00:28:16.535 rmmod nvme_keyring 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3535044 ']' 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3535044 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' -z 3535044 ']' 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # kill -0 3535044 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # uname 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3535044 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3535044' 00:28:16.535 killing process with pid 3535044 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # kill 3535044 00:28:16.535 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@976 -- # wait 3535044 00:28:16.535 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:16.535 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:16.535 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:16.535 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:16.535 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:16.535 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:16.535 
00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:28:16.535 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.535 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.535 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.535 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.535 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.910 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:17.910 00:28:17.910 real 1m9.909s 00:28:17.910 user 4m15.908s 00:28:17.910 sys 0m6.717s 00:28:17.910 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:17.910 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:17.910 ************************************ 00:28:17.910 END TEST nvmf_initiator_timeout 00:28:17.910 ************************************ 00:28:18.169 00:01:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:18.169 00:01:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:18.169 00:01:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:18.169 00:01:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.169 00:01:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.072 00:01:46 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:20.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:20.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:20.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.072 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.073 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.073 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:20.073 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:20.073 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.073 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.073 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.073 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:20.073 00:01:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:20.073 00:01:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:20.073 00:01:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:20.073 00:01:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:20.073 ************************************ 00:28:20.073 START TEST nvmf_perf_adq 00:28:20.073 ************************************ 00:28:20.073 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:20.332 * Looking for test storage... 
00:28:20.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:20.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.332 --rc genhtml_branch_coverage=1 00:28:20.332 --rc genhtml_function_coverage=1 00:28:20.332 --rc genhtml_legend=1 00:28:20.332 --rc geninfo_all_blocks=1 00:28:20.332 --rc geninfo_unexecuted_blocks=1 00:28:20.332 00:28:20.332 ' 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:20.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.332 --rc genhtml_branch_coverage=1 00:28:20.332 --rc genhtml_function_coverage=1 00:28:20.332 --rc genhtml_legend=1 00:28:20.332 --rc geninfo_all_blocks=1 00:28:20.332 --rc geninfo_unexecuted_blocks=1 00:28:20.332 00:28:20.332 ' 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:20.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.332 --rc genhtml_branch_coverage=1 00:28:20.332 --rc genhtml_function_coverage=1 00:28:20.332 --rc genhtml_legend=1 00:28:20.332 --rc geninfo_all_blocks=1 00:28:20.332 --rc geninfo_unexecuted_blocks=1 00:28:20.332 00:28:20.332 ' 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:20.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.332 --rc genhtml_branch_coverage=1 00:28:20.332 --rc genhtml_function_coverage=1 00:28:20.332 --rc genhtml_legend=1 00:28:20.332 --rc geninfo_all_blocks=1 00:28:20.332 --rc geninfo_unexecuted_blocks=1 00:28:20.332 00:28:20.332 ' 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.332 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:20.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:20.333 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:20.333 00:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.305 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.305 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.305 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.305 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.305 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.305 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.305 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.305 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.305 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.305 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.306 00:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:22.306 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:22.306 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:22.306 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:22.306 00:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:22.306 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:22.306 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:23.240 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:25.767 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:31.041 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:31.042 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:31.042 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:31.042 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:31.042 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:31.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:28:31.042 00:28:31.042 --- 10.0.0.2 ping statistics --- 00:28:31.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.042 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:28:31.042 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:31.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:28:31.042 00:28:31.042 --- 10.0.0.1 ping statistics --- 00:28:31.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.042 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3547246 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3547246 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3547246 ']' 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:31.043 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.043 [2024-11-10 00:01:56.689512] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:28:31.043 [2024-11-10 00:01:56.689658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.043 [2024-11-10 00:01:56.846816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:31.043 [2024-11-10 00:01:56.994950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.043 [2024-11-10 00:01:56.995029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.043 [2024-11-10 00:01:56.995054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.043 [2024-11-10 00:01:56.995079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:31.043 [2024-11-10 00:01:56.995099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:31.043 [2024-11-10 00:01:56.997959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.043 [2024-11-10 00:01:56.998018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.043 [2024-11-10 00:01:56.998086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.043 [2024-11-10 00:01:56.998091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.607 
00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.607 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.172 [2024-11-10 00:01:58.072333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.172 Malloc1 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.172 [2024-11-10 00:01:58.186898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3547473 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:32.172 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:34.074 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:34.074 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.074 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.074 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.074 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:34.074 "tick_rate": 2700000000, 00:28:34.074 "poll_groups": [ 00:28:34.074 { 00:28:34.074 "name": "nvmf_tgt_poll_group_000", 00:28:34.074 "admin_qpairs": 1, 00:28:34.074 "io_qpairs": 1, 00:28:34.074 "current_admin_qpairs": 1, 00:28:34.074 "current_io_qpairs": 1, 00:28:34.074 "pending_bdev_io": 0, 00:28:34.074 "completed_nvme_io": 16581, 00:28:34.074 "transports": [ 00:28:34.074 { 00:28:34.074 "trtype": "TCP" 00:28:34.074 } 00:28:34.074 ] 00:28:34.074 }, 00:28:34.074 { 00:28:34.074 "name": "nvmf_tgt_poll_group_001", 00:28:34.074 "admin_qpairs": 0, 00:28:34.074 "io_qpairs": 1, 00:28:34.074 "current_admin_qpairs": 0, 00:28:34.074 "current_io_qpairs": 1, 00:28:34.074 "pending_bdev_io": 0, 00:28:34.074 "completed_nvme_io": 16210, 00:28:34.074 "transports": [ 00:28:34.074 { 00:28:34.074 "trtype": "TCP" 00:28:34.074 } 00:28:34.074 ] 00:28:34.074 }, 00:28:34.074 { 00:28:34.074 "name": "nvmf_tgt_poll_group_002", 00:28:34.074 "admin_qpairs": 0, 00:28:34.074 "io_qpairs": 1, 00:28:34.074 "current_admin_qpairs": 0, 00:28:34.074 "current_io_qpairs": 1, 00:28:34.074 "pending_bdev_io": 0, 00:28:34.074 "completed_nvme_io": 16531, 00:28:34.074 "transports": [ 00:28:34.074 { 00:28:34.074 "trtype": "TCP" 00:28:34.074 } 00:28:34.074 ] 00:28:34.074 }, 00:28:34.074 { 00:28:34.074 "name": "nvmf_tgt_poll_group_003", 00:28:34.074 "admin_qpairs": 0, 00:28:34.074 "io_qpairs": 1, 00:28:34.074 "current_admin_qpairs": 0, 00:28:34.074 "current_io_qpairs": 1, 00:28:34.074 "pending_bdev_io": 0, 00:28:34.074 "completed_nvme_io": 17046, 00:28:34.074 "transports": [ 00:28:34.074 { 00:28:34.074 "trtype": "TCP" 00:28:34.074 } 00:28:34.074 ] 00:28:34.074 } 00:28:34.074 ] 00:28:34.074 }' 00:28:34.074 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:34.074 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:34.074 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:34.074 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:34.074 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3547473 00:28:42.189 Initializing NVMe Controllers 00:28:42.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:42.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:42.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:42.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:42.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:42.189 
Initialization complete. Launching workers. 00:28:42.189 ======================================================== 00:28:42.189 Latency(us) 00:28:42.189 Device Information : IOPS MiB/s Average min max 00:28:42.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8853.00 34.58 7228.73 3034.85 12097.92 00:28:42.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8665.50 33.85 7386.83 3930.54 10811.25 00:28:42.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8809.90 34.41 7265.04 3169.81 11832.51 00:28:42.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8938.60 34.92 7159.73 3231.70 12902.21 00:28:42.189 ======================================================== 00:28:42.189 Total : 35267.00 137.76 7259.16 3034.85 12902.21 00:28:42.189 00:28:42.189 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:42.189 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:42.189 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:42.448 rmmod nvme_tcp 00:28:42.448 rmmod nvme_fabrics 00:28:42.448 rmmod nvme_keyring 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3547246 ']' 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3547246 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3547246 ']' 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3547246 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3547246 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3547246' 00:28:42.448 killing process with pid 3547246 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3547246 00:28:42.448 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3547246 00:28:43.831 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:28:43.831 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:43.831 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:43.831 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:43.831 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:43.831 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:43.831 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:43.831 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:43.831 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:43.831 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.831 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.831 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.739 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:45.739 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:45.739 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:45.739 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:46.307 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:48.841 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:54.119 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:54.120 00:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:54.120 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:54.120 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:54.120 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.120 00:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:54.120 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.120 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:54.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:28:54.121 00:28:54.121 --- 10.0.0.2 ping statistics --- 00:28:54.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.121 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:54.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:28:54.121 00:28:54.121 --- 10.0.0.1 ping statistics --- 00:28:54.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.121 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:54.121 net.core.busy_poll = 1 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:54.121 net.core.busy_read = 1 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:54.121 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3550229 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3550229 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3550229 ']' 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:54.121 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.121 [2024-11-10 00:02:20.195461] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:28:54.121 [2024-11-10 00:02:20.195610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.380 [2024-11-10 00:02:20.347780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.380 [2024-11-10 00:02:20.490283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
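For readers following the trace, the adq_configure_driver sequence above (the perf_adq.sh@22 through @38 steps) reduces to the command list below. This is a condensed sketch assembled from this run's trace; the cvl_0_0 interface, the cvl_0_0_ns_spdk namespace, the 10.0.0.2:4420 listener and the SPDK checkout path are specific to this environment and should be treated as placeholders anywhere else.

# Enable hardware TC offload on the E810 port and disable packet-inspect optimization
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

# Let application threads busy-poll their sockets instead of sleeping on interrupts
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Create two traffic classes in channel mode (2 queues each), then steer NVMe/TCP
# traffic destined for 10.0.0.2:4420 into the second class in hardware (ADQ)
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# Align XPS/RX-queue affinity with the configured channels (path relative to the SPDK repo root)
ip netns exec cvl_0_0_ns_spdk ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0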
00:28:54.380 [2024-11-10 00:02:20.490366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.380 [2024-11-10 00:02:20.490411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.380 [2024-11-10 00:02:20.490452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.380 [2024-11-10 00:02:20.490484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.380 [2024-11-10 00:02:20.493434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.380 [2024-11-10 00:02:20.493503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.380 [2024-11-10 00:02:20.493624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.380 [2024-11-10 00:02:20.493626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.947 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:54.947 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:28:54.947 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:54.947 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:54.947 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.207 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.465 00:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.465 [2024-11-10 00:02:21.555440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.465 Malloc1 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.465 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.723 [2024-11-10 00:02:21.668022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.723 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.723 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3550394 00:28:55.723 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:55.723 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:57.628 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:57.628 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.628 00:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:57.628 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.628 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:57.628 "tick_rate": 2700000000, 00:28:57.628 "poll_groups": [ 00:28:57.628 { 00:28:57.628 "name": "nvmf_tgt_poll_group_000", 00:28:57.628 "admin_qpairs": 1, 00:28:57.628 "io_qpairs": 2, 00:28:57.628 "current_admin_qpairs": 1, 00:28:57.628 "current_io_qpairs": 2, 00:28:57.628 "pending_bdev_io": 0, 00:28:57.628 "completed_nvme_io": 19556, 00:28:57.628 "transports": [ 00:28:57.628 { 00:28:57.628 "trtype": "TCP" 00:28:57.628 } 00:28:57.628 ] 00:28:57.628 }, 00:28:57.628 { 00:28:57.628 "name": "nvmf_tgt_poll_group_001", 00:28:57.628 "admin_qpairs": 0, 00:28:57.628 "io_qpairs": 2, 00:28:57.628 "current_admin_qpairs": 0, 00:28:57.628 "current_io_qpairs": 2, 00:28:57.628 "pending_bdev_io": 0, 00:28:57.628 "completed_nvme_io": 19055, 00:28:57.628 "transports": [ 00:28:57.628 { 00:28:57.628 "trtype": "TCP" 00:28:57.628 } 00:28:57.628 ] 00:28:57.628 }, 00:28:57.628 { 00:28:57.628 "name": "nvmf_tgt_poll_group_002", 00:28:57.628 "admin_qpairs": 0, 00:28:57.628 "io_qpairs": 0, 00:28:57.628 "current_admin_qpairs": 0, 00:28:57.628 "current_io_qpairs": 0, 00:28:57.628 "pending_bdev_io": 0, 00:28:57.628 "completed_nvme_io": 0, 00:28:57.628 "transports": [ 00:28:57.628 { 00:28:57.628 "trtype": "TCP" 00:28:57.628 } 00:28:57.628 ] 00:28:57.628 }, 00:28:57.628 { 00:28:57.628 "name": "nvmf_tgt_poll_group_003", 00:28:57.628 "admin_qpairs": 0, 00:28:57.628 "io_qpairs": 0, 00:28:57.628 "current_admin_qpairs": 0, 00:28:57.628 "current_io_qpairs": 0, 00:28:57.628 "pending_bdev_io": 0, 00:28:57.628 "completed_nvme_io": 0, 00:28:57.628 "transports": [ 00:28:57.628 { 00:28:57.628 "trtype": "TCP" 00:28:57.628 } 00:28:57.628 ] 00:28:57.628 } 00:28:57.628 ] 00:28:57.628 }' 00:28:57.628 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:57.628 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:57.628 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:57.628 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:57.628 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3550394 00:29:05.823 Initializing NVMe Controllers 00:29:05.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:05.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:05.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:05.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:05.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:05.823 Initialization complete. Launching workers. 
00:29:05.823 ======================================================== 00:29:05.823 Latency(us) 00:29:05.823 Device Information : IOPS MiB/s Average min max 00:29:05.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5577.30 21.79 11486.78 2228.26 57513.07 00:29:05.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5042.60 19.70 12694.74 2304.45 61928.48 00:29:05.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5353.90 20.91 11957.13 2899.86 57415.32 00:29:05.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5003.60 19.55 12790.37 2227.03 58277.41 00:29:05.823 ======================================================== 00:29:05.823 Total : 20977.40 81.94 12208.13 2227.03 61928.48 00:29:05.823 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.823 rmmod nvme_tcp 00:29:05.823 rmmod nvme_fabrics 00:29:05.823 rmmod nvme_keyring 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3550229 ']' 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3550229 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3550229 ']' 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3550229 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3550229 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3550229' 00:29:05.823 killing process with pid 3550229 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3550229 00:29:05.823 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3550229 00:29:07.201 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:07.201 
00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:07.201 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:07.201 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:07.201 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:07.201 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:07.201 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:07.201 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:07.201 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:07.201 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.201 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.201 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:09.736 00:29:09.736 real 0m49.070s 00:29:09.736 user 2m52.022s 00:29:09.736 sys 0m10.377s 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:09.736 ************************************ 00:29:09.736 END TEST nvmf_perf_adq 00:29:09.736 ************************************ 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:09.736 ************************************ 00:29:09.736 START TEST nvmf_shutdown 00:29:09.736 ************************************ 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:09.736 * Looking for test storage... 
00:29:09.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:09.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.736 --rc genhtml_branch_coverage=1 00:29:09.736 --rc genhtml_function_coverage=1 00:29:09.736 --rc genhtml_legend=1 00:29:09.736 --rc geninfo_all_blocks=1 00:29:09.736 --rc geninfo_unexecuted_blocks=1 00:29:09.736 00:29:09.736 ' 00:29:09.736 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:09.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.736 --rc genhtml_branch_coverage=1 00:29:09.736 --rc genhtml_function_coverage=1 00:29:09.736 --rc genhtml_legend=1 00:29:09.736 --rc geninfo_all_blocks=1 00:29:09.736 --rc geninfo_unexecuted_blocks=1 00:29:09.737 00:29:09.737 ' 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:09.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.737 --rc genhtml_branch_coverage=1 00:29:09.737 --rc genhtml_function_coverage=1 00:29:09.737 --rc genhtml_legend=1 00:29:09.737 --rc geninfo_all_blocks=1 00:29:09.737 --rc geninfo_unexecuted_blocks=1 00:29:09.737 00:29:09.737 ' 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:09.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.737 --rc genhtml_branch_coverage=1 00:29:09.737 --rc genhtml_function_coverage=1 00:29:09.737 --rc genhtml_legend=1 00:29:09.737 --rc geninfo_all_blocks=1 00:29:09.737 --rc geninfo_unexecuted_blocks=1 00:29:09.737 00:29:09.737 ' 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
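As an aside for readers decoding the wall of xtrace above: the scripts/common.sh checks (cmp_versions, decimal, and the "lt 1.15 2" call used to pick lcov options) implement a plain component-wise numeric version comparison. The sketch below captures the same idea in condensed form; it is not the verbatim helper, and version_lt plus the lcov example in the trailing comment are illustrative names only.

# Split two version strings on '.', '-' and ':' and compare component by component;
# returns 0 (success) when $1 is strictly lower than $2, missing components count as 0.
version_lt() {
    local -a v1 v2
    local i n
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

# e.g. lcov 1.15 predates 2.x, so the legacy option set would be selected:
# version_lt 1.15 2 && echo "use legacy lcov options"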
00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.737 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:09.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:09.738 00:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:09.738 ************************************ 00:29:09.738 START TEST nvmf_shutdown_tc1 00:29:09.738 ************************************ 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:09.738 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.640 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.640 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:11.640 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:11.640 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:11.640 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:11.641 00:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:11.641 00:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:11.641 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:11.641 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:11.641 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:11.641 00:02:37 
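The gather_supported_nvmf_pci_devs trace above classifies NICs purely by PCI vendor:device ID (Intel E810 0x1592/0x159b, X722 0x37d2, plus a list of Mellanox ConnectX IDs) and then resolves each matching PCI function to its kernel net device through /sys/bus/pci/devices/<bdf>/net/. A minimal sketch of that lookup, assuming lspci is available (the harness itself walks a pre-built pci_bus_cache map instead of shelling out to lspci):

    # Sketch only: find net devices backed by supported Intel E810 ports (IDs from the trace above)
    for id in 1592 159b; do
        for bdf in $(lspci -D -d 8086:$id | awk '{print $1}'); do
            for netdir in /sys/bus/pci/devices/$bdf/net/*; do
                [ -e "$netdir" ] && echo "Found ${netdir##*/} under $bdf"
            done
        done
    done

On this host the two matching functions are 0000:0a:00.0 and 0000:0a:00.1, exposing cvl_0_0 and cvl_0_1 respectively.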
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:11.641 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:11.641 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:29:11.642 00:29:11.642 --- 10.0.0.2 ping statistics --- 00:29:11.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.642 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:29:11.642 00:29:11.642 --- 10.0.0.1 ping statistics --- 00:29:11.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.642 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3553688 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3553688 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3553688 ']' 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
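nvmf_tcp_init above wires the two E810 ports into a back-to-back topology: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens TCP port 4420, and one ping in each direction proves the path before any NVMe/TCP traffic flows. Condensed to the commands that appear in the trace (the harness additionally tags the iptables rule with an SPDK_NVMF comment so teardown can find it later):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one physical port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Every subsequent target-side command, including nvmf_tgt itself, is then prefixed with ip netns exec cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD.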
00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:11.642 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.642 [2024-11-10 00:02:37.741834] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:29:11.642 [2024-11-10 00:02:37.742008] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.901 [2024-11-10 00:02:37.894398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.901 [2024-11-10 00:02:38.023009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.901 [2024-11-10 00:02:38.023075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.901 [2024-11-10 00:02:38.023096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.901 [2024-11-10 00:02:38.023116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.901 [2024-11-10 00:02:38.023133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.901 [2024-11-10 00:02:38.025793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.901 [2024-11-10 00:02:38.025859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.901 [2024-11-10 00:02:38.025906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.901 [2024-11-10 00:02:38.025927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:12.834 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.835 [2024-11-10 00:02:38.779918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:12.835 00:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.835 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.835 Malloc1 
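The create_subsystems stage above is driven by a batch file: shutdown.sh@28-29 appends one block of RPC calls per subsystem index 1..10 to rpcs.txt, and shutdown.sh@36 replays the whole file through a single rpc_cmd invocation (the input redirection is not visible in the xtrace); the Malloc1..Malloc10 lines here and the TCP listener notice just below are the visible result. The blocks themselves are never echoed, but judging from that output each one plausibly amounts to something like the following (illustrative reconstruction, not the verbatim shutdown.sh here-document):

    # One block per subsystem index i, appended to rpcs.txt and executed as a single rpc.py batch
    bdev_malloc_create -b Malloc$i 128 512     # backing RAM disk; size/block-size values are placeholders
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

Batching the roughly forty calls into one rpc.py invocation avoids paying the Python startup cost once per RPC.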
00:29:12.835 [2024-11-10 00:02:38.927390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.835 Malloc2 00:29:13.094 Malloc3 00:29:13.094 Malloc4 00:29:13.351 Malloc5 00:29:13.351 Malloc6 00:29:13.351 Malloc7 00:29:13.609 Malloc8 00:29:13.609 Malloc9 00:29:13.868 Malloc10 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3554000 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3554000 /var/tmp/bdevperf.sock 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3554000 ']' 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:13.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
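The bdev_svc helper at shutdown.sh@78 is launched with its configuration fed in through process substitution: --json /dev/fd/63 in the trace is the read end of <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10), as the later "Killed" message spells out. Reassembled, the traced command is:

    # Expanded form of the traced shutdown.sh@78 command (paths from this workspace)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
        -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}")

gen_nvmf_target_json, traced next, emits one bdev_nvme_attach_controller params block per subsystem, so the helper attaches to all ten NVMe/TCP subsystems the moment it starts.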
00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.868 { 00:29:13.868 "params": { 00:29:13.868 "name": "Nvme$subsystem", 00:29:13.868 "trtype": "$TEST_TRANSPORT", 00:29:13.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.868 "adrfam": "ipv4", 00:29:13.868 "trsvcid": "$NVMF_PORT", 00:29:13.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.868 "hdgst": ${hdgst:-false}, 00:29:13.868 "ddgst": ${ddgst:-false} 00:29:13.868 }, 00:29:13.868 "method": "bdev_nvme_attach_controller" 00:29:13.868 } 00:29:13.868 EOF 00:29:13.868 )") 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.868 { 00:29:13.868 "params": { 00:29:13.868 "name": "Nvme$subsystem", 00:29:13.868 "trtype": "$TEST_TRANSPORT", 00:29:13.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.868 "adrfam": "ipv4", 00:29:13.868 "trsvcid": "$NVMF_PORT", 00:29:13.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.868 "hdgst": ${hdgst:-false}, 00:29:13.868 "ddgst": ${ddgst:-false} 00:29:13.868 }, 00:29:13.868 "method": "bdev_nvme_attach_controller" 00:29:13.868 } 00:29:13.868 EOF 00:29:13.868 )") 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.868 { 00:29:13.868 "params": { 00:29:13.868 "name": "Nvme$subsystem", 00:29:13.868 "trtype": "$TEST_TRANSPORT", 00:29:13.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.868 "adrfam": "ipv4", 00:29:13.868 "trsvcid": "$NVMF_PORT", 00:29:13.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.868 "hdgst": ${hdgst:-false}, 00:29:13.868 "ddgst": ${ddgst:-false} 00:29:13.868 }, 00:29:13.868 "method": "bdev_nvme_attach_controller" 00:29:13.868 } 00:29:13.868 EOF 00:29:13.868 )") 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.868 { 00:29:13.868 "params": { 00:29:13.868 "name": "Nvme$subsystem", 00:29:13.868 
"trtype": "$TEST_TRANSPORT", 00:29:13.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.868 "adrfam": "ipv4", 00:29:13.868 "trsvcid": "$NVMF_PORT", 00:29:13.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.868 "hdgst": ${hdgst:-false}, 00:29:13.868 "ddgst": ${ddgst:-false} 00:29:13.868 }, 00:29:13.868 "method": "bdev_nvme_attach_controller" 00:29:13.868 } 00:29:13.868 EOF 00:29:13.868 )") 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.868 { 00:29:13.868 "params": { 00:29:13.868 "name": "Nvme$subsystem", 00:29:13.868 "trtype": "$TEST_TRANSPORT", 00:29:13.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.868 "adrfam": "ipv4", 00:29:13.868 "trsvcid": "$NVMF_PORT", 00:29:13.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.868 "hdgst": ${hdgst:-false}, 00:29:13.868 "ddgst": ${ddgst:-false} 00:29:13.868 }, 00:29:13.868 "method": "bdev_nvme_attach_controller" 00:29:13.868 } 00:29:13.868 EOF 00:29:13.868 )") 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.868 { 00:29:13.868 "params": { 00:29:13.868 "name": "Nvme$subsystem", 00:29:13.868 "trtype": "$TEST_TRANSPORT", 00:29:13.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.868 "adrfam": "ipv4", 00:29:13.868 "trsvcid": "$NVMF_PORT", 00:29:13.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.868 "hdgst": ${hdgst:-false}, 00:29:13.868 "ddgst": ${ddgst:-false} 00:29:13.868 }, 00:29:13.868 "method": "bdev_nvme_attach_controller" 00:29:13.868 } 00:29:13.868 EOF 00:29:13.868 )") 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.868 { 00:29:13.868 "params": { 00:29:13.868 "name": "Nvme$subsystem", 00:29:13.868 "trtype": "$TEST_TRANSPORT", 00:29:13.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.868 "adrfam": "ipv4", 00:29:13.868 "trsvcid": "$NVMF_PORT", 00:29:13.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.868 "hdgst": ${hdgst:-false}, 00:29:13.868 "ddgst": ${ddgst:-false} 00:29:13.868 }, 00:29:13.868 "method": "bdev_nvme_attach_controller" 00:29:13.868 } 00:29:13.868 EOF 00:29:13.868 )") 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.868 00:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.868 { 00:29:13.868 "params": { 00:29:13.868 "name": "Nvme$subsystem", 00:29:13.868 "trtype": "$TEST_TRANSPORT", 00:29:13.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.868 "adrfam": "ipv4", 00:29:13.868 "trsvcid": "$NVMF_PORT", 00:29:13.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.868 "hdgst": ${hdgst:-false}, 00:29:13.868 "ddgst": ${ddgst:-false} 00:29:13.868 }, 00:29:13.868 "method": "bdev_nvme_attach_controller" 00:29:13.868 } 00:29:13.868 EOF 00:29:13.868 )") 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.868 { 00:29:13.868 "params": { 00:29:13.868 "name": "Nvme$subsystem", 00:29:13.868 "trtype": "$TEST_TRANSPORT", 00:29:13.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.868 "adrfam": "ipv4", 00:29:13.868 "trsvcid": "$NVMF_PORT", 00:29:13.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.868 "hdgst": ${hdgst:-false}, 00:29:13.868 "ddgst": ${ddgst:-false} 00:29:13.868 }, 00:29:13.868 "method": "bdev_nvme_attach_controller" 00:29:13.868 } 00:29:13.868 EOF 00:29:13.868 )") 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.868 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.868 { 00:29:13.868 "params": { 00:29:13.868 "name": "Nvme$subsystem", 00:29:13.868 "trtype": "$TEST_TRANSPORT", 00:29:13.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.868 "adrfam": "ipv4", 00:29:13.868 "trsvcid": "$NVMF_PORT", 00:29:13.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.868 "hdgst": ${hdgst:-false}, 00:29:13.868 "ddgst": ${ddgst:-false} 00:29:13.868 }, 00:29:13.868 "method": "bdev_nvme_attach_controller" 00:29:13.868 } 00:29:13.869 EOF 00:29:13.869 )") 00:29:13.869 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:13.869 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
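Each pass of the for subsystem in "${@:-1}" loop above appends a parameterized bdev_nvme_attach_controller block to the config array through a command-substituted here-document; ${hdgst:-false} and ${ddgst:-false} keep header and data digests off unless the caller exports them, and the closing jq . validates the assembled document before it reaches the application. A stripped-down version of the same accumulate-then-merge pattern (generic, not the verbatim nvmf/common.sh function; printf stands in for the here-document so the sketch stays copy-pasteable):

    # Sketch: collect per-controller JSON fragments, then comma-join and validate with jq
    fmt='{ "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2",
      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s" }'
    config=()
    for i in "$@"; do
        config+=("$(printf "$fmt" "$i" "$i")")
    done
    ( IFS=,; printf '[%s]\n' "${config[*]}" ) | jq .   # one comma-joined array, pretty-printed

Joining with IFS=, is what produces the },{ separators visible in the printf output below.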
00:29:13.869 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:13.869 00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:13.869 "params": { 00:29:13.869 "name": "Nvme1", 00:29:13.869 "trtype": "tcp", 00:29:13.869 "traddr": "10.0.0.2", 00:29:13.869 "adrfam": "ipv4", 00:29:13.869 "trsvcid": "4420", 00:29:13.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:13.869 "hdgst": false, 00:29:13.869 "ddgst": false 00:29:13.869 }, 00:29:13.869 "method": "bdev_nvme_attach_controller" 00:29:13.869 },{ 00:29:13.869 "params": { 00:29:13.869 "name": "Nvme2", 00:29:13.869 "trtype": "tcp", 00:29:13.869 "traddr": "10.0.0.2", 00:29:13.869 "adrfam": "ipv4", 00:29:13.869 "trsvcid": "4420", 00:29:13.869 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:13.869 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:13.869 "hdgst": false, 00:29:13.869 "ddgst": false 00:29:13.869 }, 00:29:13.869 "method": "bdev_nvme_attach_controller" 00:29:13.869 },{ 00:29:13.869 "params": { 00:29:13.869 "name": "Nvme3", 00:29:13.869 "trtype": "tcp", 00:29:13.869 "traddr": "10.0.0.2", 00:29:13.869 "adrfam": "ipv4", 00:29:13.869 "trsvcid": "4420", 00:29:13.869 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:13.869 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:13.869 "hdgst": false, 00:29:13.869 "ddgst": false 00:29:13.869 }, 00:29:13.869 "method": "bdev_nvme_attach_controller" 00:29:13.869 },{ 00:29:13.869 "params": { 00:29:13.869 "name": "Nvme4", 00:29:13.869 "trtype": "tcp", 00:29:13.869 "traddr": "10.0.0.2", 00:29:13.869 "adrfam": "ipv4", 00:29:13.869 "trsvcid": "4420", 00:29:13.869 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:13.869 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:13.869 "hdgst": false, 00:29:13.869 "ddgst": false 00:29:13.869 }, 00:29:13.869 "method": "bdev_nvme_attach_controller" 00:29:13.869 },{ 00:29:13.869 "params": { 00:29:13.869 "name": "Nvme5", 00:29:13.869 "trtype": "tcp", 00:29:13.869 "traddr": "10.0.0.2", 00:29:13.869 "adrfam": "ipv4", 00:29:13.869 "trsvcid": "4420", 00:29:13.869 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:13.869 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:13.869 "hdgst": false, 00:29:13.869 "ddgst": false 00:29:13.869 }, 00:29:13.869 "method": "bdev_nvme_attach_controller" 00:29:13.869 },{ 00:29:13.869 "params": { 00:29:13.869 "name": "Nvme6", 00:29:13.869 "trtype": "tcp", 00:29:13.869 "traddr": "10.0.0.2", 00:29:13.869 "adrfam": "ipv4", 00:29:13.869 "trsvcid": "4420", 00:29:13.869 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:13.869 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:13.869 "hdgst": false, 00:29:13.869 "ddgst": false 00:29:13.869 }, 00:29:13.869 "method": "bdev_nvme_attach_controller" 00:29:13.869 },{ 00:29:13.869 "params": { 00:29:13.869 "name": "Nvme7", 00:29:13.869 "trtype": "tcp", 00:29:13.869 "traddr": "10.0.0.2", 00:29:13.869 "adrfam": "ipv4", 00:29:13.869 "trsvcid": "4420", 00:29:13.869 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:13.869 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:13.869 "hdgst": false, 00:29:13.869 "ddgst": false 00:29:13.869 }, 00:29:13.869 "method": "bdev_nvme_attach_controller" 00:29:13.869 },{ 00:29:13.869 "params": { 00:29:13.869 "name": "Nvme8", 00:29:13.869 "trtype": "tcp", 00:29:13.869 "traddr": "10.0.0.2", 00:29:13.869 "adrfam": "ipv4", 00:29:13.869 "trsvcid": "4420", 00:29:13.869 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:13.869 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:13.869 "hdgst": false, 00:29:13.869 "ddgst": false 00:29:13.869 }, 00:29:13.869 "method": "bdev_nvme_attach_controller" 00:29:13.869 },{ 00:29:13.869 "params": { 00:29:13.869 "name": "Nvme9", 00:29:13.869 "trtype": "tcp", 00:29:13.869 "traddr": "10.0.0.2", 00:29:13.869 "adrfam": "ipv4", 00:29:13.869 "trsvcid": "4420", 00:29:13.869 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:13.869 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:13.869 "hdgst": false, 00:29:13.869 "ddgst": false 00:29:13.869 }, 00:29:13.869 "method": "bdev_nvme_attach_controller" 00:29:13.869 },{ 00:29:13.869 "params": { 00:29:13.869 "name": "Nvme10", 00:29:13.869 "trtype": "tcp", 00:29:13.869 "traddr": "10.0.0.2", 00:29:13.869 "adrfam": "ipv4", 00:29:13.869 "trsvcid": "4420", 00:29:13.869 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:13.869 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:13.869 "hdgst": false, 00:29:13.869 "ddgst": false 00:29:13.869 }, 00:29:13.869 "method": "bdev_nvme_attach_controller" 00:29:13.869 }' 00:29:13.869 [2024-11-10 00:02:39.940290] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:29:13.869 [2024-11-10 00:02:39.940440] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:14.127 [2024-11-10 00:02:40.091146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.127 [2024-11-10 00:02:40.223412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.025 00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:16.025 00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:29:16.025 00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:16.025 00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.025 00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:16.025 00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.025 00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3554000 00:29:16.025 00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:16.025 00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:16.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3554000 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:16.957 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3553688 00:29:16.957 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.958 { 00:29:16.958 "params": { 00:29:16.958 "name": "Nvme$subsystem", 00:29:16.958 "trtype": "$TEST_TRANSPORT", 00:29:16.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.958 "adrfam": "ipv4", 00:29:16.958 "trsvcid": "$NVMF_PORT", 00:29:16.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.958 "hdgst": ${hdgst:-false}, 00:29:16.958 "ddgst": ${ddgst:-false} 00:29:16.958 }, 00:29:16.958 "method": "bdev_nvme_attach_controller" 00:29:16.958 } 00:29:16.958 EOF 00:29:16.958 )") 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.958 { 00:29:16.958 "params": { 00:29:16.958 "name": "Nvme$subsystem", 00:29:16.958 "trtype": "$TEST_TRANSPORT", 00:29:16.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.958 "adrfam": "ipv4", 00:29:16.958 "trsvcid": "$NVMF_PORT", 00:29:16.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.958 "hdgst": ${hdgst:-false}, 00:29:16.958 "ddgst": ${ddgst:-false} 00:29:16.958 }, 00:29:16.958 "method": "bdev_nvme_attach_controller" 00:29:16.958 } 00:29:16.958 EOF 00:29:16.958 )") 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.958 { 00:29:16.958 "params": { 00:29:16.958 "name": "Nvme$subsystem", 00:29:16.958 "trtype": "$TEST_TRANSPORT", 00:29:16.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.958 "adrfam": "ipv4", 00:29:16.958 "trsvcid": "$NVMF_PORT", 00:29:16.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.958 "hdgst": ${hdgst:-false}, 00:29:16.958 "ddgst": ${ddgst:-false} 00:29:16.958 }, 00:29:16.958 "method": "bdev_nvme_attach_controller" 00:29:16.958 } 00:29:16.958 EOF 00:29:16.958 )") 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.958 { 00:29:16.958 "params": { 00:29:16.958 "name": "Nvme$subsystem", 00:29:16.958 "trtype": "$TEST_TRANSPORT", 00:29:16.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.958 "adrfam": "ipv4", 00:29:16.958 
"trsvcid": "$NVMF_PORT", 00:29:16.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.958 "hdgst": ${hdgst:-false}, 00:29:16.958 "ddgst": ${ddgst:-false} 00:29:16.958 }, 00:29:16.958 "method": "bdev_nvme_attach_controller" 00:29:16.958 } 00:29:16.958 EOF 00:29:16.958 )") 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.958 { 00:29:16.958 "params": { 00:29:16.958 "name": "Nvme$subsystem", 00:29:16.958 "trtype": "$TEST_TRANSPORT", 00:29:16.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.958 "adrfam": "ipv4", 00:29:16.958 "trsvcid": "$NVMF_PORT", 00:29:16.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.958 "hdgst": ${hdgst:-false}, 00:29:16.958 "ddgst": ${ddgst:-false} 00:29:16.958 }, 00:29:16.958 "method": "bdev_nvme_attach_controller" 00:29:16.958 } 00:29:16.958 EOF 00:29:16.958 )") 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.958 { 00:29:16.958 "params": { 00:29:16.958 "name": "Nvme$subsystem", 00:29:16.958 "trtype": "$TEST_TRANSPORT", 00:29:16.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.958 "adrfam": "ipv4", 00:29:16.958 "trsvcid": "$NVMF_PORT", 00:29:16.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.958 "hdgst": ${hdgst:-false}, 00:29:16.958 "ddgst": ${ddgst:-false} 00:29:16.958 }, 00:29:16.958 "method": "bdev_nvme_attach_controller" 00:29:16.958 } 00:29:16.958 EOF 00:29:16.958 )") 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.958 { 00:29:16.958 "params": { 00:29:16.958 "name": "Nvme$subsystem", 00:29:16.958 "trtype": "$TEST_TRANSPORT", 00:29:16.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.958 "adrfam": "ipv4", 00:29:16.958 "trsvcid": "$NVMF_PORT", 00:29:16.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.958 "hdgst": ${hdgst:-false}, 00:29:16.958 "ddgst": ${ddgst:-false} 00:29:16.958 }, 00:29:16.958 "method": "bdev_nvme_attach_controller" 00:29:16.958 } 00:29:16.958 EOF 00:29:16.958 )") 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.958 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.958 { 00:29:16.958 
"params": { 00:29:16.958 "name": "Nvme$subsystem", 00:29:16.958 "trtype": "$TEST_TRANSPORT", 00:29:16.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.958 "adrfam": "ipv4", 00:29:16.958 "trsvcid": "$NVMF_PORT", 00:29:16.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.958 "hdgst": ${hdgst:-false}, 00:29:16.958 "ddgst": ${ddgst:-false} 00:29:16.958 }, 00:29:16.959 "method": "bdev_nvme_attach_controller" 00:29:16.959 } 00:29:16.959 EOF 00:29:16.959 )") 00:29:16.959 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.959 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.959 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.959 { 00:29:16.959 "params": { 00:29:16.959 "name": "Nvme$subsystem", 00:29:16.959 "trtype": "$TEST_TRANSPORT", 00:29:16.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.959 "adrfam": "ipv4", 00:29:16.959 "trsvcid": "$NVMF_PORT", 00:29:16.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.959 "hdgst": ${hdgst:-false}, 00:29:16.959 "ddgst": ${ddgst:-false} 00:29:16.959 }, 00:29:16.959 "method": "bdev_nvme_attach_controller" 00:29:16.959 } 00:29:16.959 EOF 00:29:16.959 )") 00:29:16.959 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.959 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.959 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.959 { 00:29:16.959 "params": { 00:29:16.959 "name": "Nvme$subsystem", 00:29:16.959 "trtype": "$TEST_TRANSPORT", 00:29:16.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.959 "adrfam": "ipv4", 00:29:16.959 "trsvcid": "$NVMF_PORT", 00:29:16.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.959 "hdgst": ${hdgst:-false}, 00:29:16.959 "ddgst": ${ddgst:-false} 00:29:16.959 }, 00:29:16.959 "method": "bdev_nvme_attach_controller" 00:29:16.959 } 00:29:16.959 EOF 00:29:16.959 )") 00:29:16.959 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:16.959 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:29:16.959 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:16.959 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:16.959 "params": { 00:29:16.959 "name": "Nvme1", 00:29:16.959 "trtype": "tcp", 00:29:16.959 "traddr": "10.0.0.2", 00:29:16.959 "adrfam": "ipv4", 00:29:16.959 "trsvcid": "4420", 00:29:16.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:16.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:16.959 "hdgst": false, 00:29:16.959 "ddgst": false 00:29:16.959 }, 00:29:16.959 "method": "bdev_nvme_attach_controller" 00:29:16.959 },{ 00:29:16.959 "params": { 00:29:16.959 "name": "Nvme2", 00:29:16.959 "trtype": "tcp", 00:29:16.959 "traddr": "10.0.0.2", 00:29:16.959 "adrfam": "ipv4", 00:29:16.959 "trsvcid": "4420", 00:29:16.959 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:16.959 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:16.959 "hdgst": false, 00:29:16.959 "ddgst": false 00:29:16.959 }, 00:29:16.959 "method": "bdev_nvme_attach_controller" 00:29:16.959 },{ 00:29:16.959 "params": { 00:29:16.959 "name": "Nvme3", 00:29:16.959 "trtype": "tcp", 00:29:16.959 "traddr": "10.0.0.2", 00:29:16.959 "adrfam": "ipv4", 00:29:16.959 "trsvcid": "4420", 00:29:16.959 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:16.959 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:16.959 "hdgst": false, 00:29:16.959 "ddgst": false 00:29:16.959 }, 00:29:16.959 "method": "bdev_nvme_attach_controller" 00:29:16.959 },{ 00:29:16.959 "params": { 00:29:16.959 "name": "Nvme4", 00:29:16.959 "trtype": "tcp", 00:29:16.959 "traddr": "10.0.0.2", 00:29:16.959 "adrfam": "ipv4", 00:29:16.959 "trsvcid": "4420", 00:29:16.959 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:16.959 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:16.959 "hdgst": false, 00:29:16.959 "ddgst": false 00:29:16.959 }, 00:29:16.959 "method": "bdev_nvme_attach_controller" 00:29:16.959 },{ 00:29:16.959 "params": { 00:29:16.959 "name": "Nvme5", 00:29:16.959 "trtype": "tcp", 00:29:16.959 "traddr": "10.0.0.2", 00:29:16.959 "adrfam": "ipv4", 00:29:16.959 "trsvcid": "4420", 00:29:16.959 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:16.959 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:16.959 "hdgst": false, 00:29:16.959 "ddgst": false 00:29:16.959 }, 00:29:16.959 "method": "bdev_nvme_attach_controller" 00:29:16.959 },{ 00:29:16.959 "params": { 00:29:16.959 "name": "Nvme6", 00:29:16.959 "trtype": "tcp", 00:29:16.959 "traddr": "10.0.0.2", 00:29:16.959 "adrfam": "ipv4", 00:29:16.959 "trsvcid": "4420", 00:29:16.959 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:16.959 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:16.959 "hdgst": false, 00:29:16.959 "ddgst": false 00:29:16.959 }, 00:29:16.959 "method": "bdev_nvme_attach_controller" 00:29:16.959 },{ 00:29:16.959 "params": { 00:29:16.959 "name": "Nvme7", 00:29:16.959 "trtype": "tcp", 00:29:16.959 "traddr": "10.0.0.2", 00:29:16.959 "adrfam": "ipv4", 00:29:16.959 "trsvcid": "4420", 00:29:16.959 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:16.959 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:16.959 "hdgst": false, 00:29:16.959 "ddgst": false 00:29:16.959 }, 00:29:16.959 "method": "bdev_nvme_attach_controller" 00:29:16.959 },{ 00:29:16.959 "params": { 00:29:16.959 "name": "Nvme8", 00:29:16.959 "trtype": "tcp", 00:29:16.959 "traddr": "10.0.0.2", 00:29:16.959 "adrfam": "ipv4", 00:29:16.959 "trsvcid": "4420", 00:29:16.959 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:16.959 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:16.959 "hdgst": false, 00:29:16.959 "ddgst": false 00:29:16.959 }, 00:29:16.959 "method": "bdev_nvme_attach_controller" 00:29:16.959 },{ 00:29:16.959 "params": { 00:29:16.959 "name": "Nvme9", 00:29:16.959 "trtype": "tcp", 00:29:16.959 "traddr": "10.0.0.2", 00:29:16.960 "adrfam": "ipv4", 00:29:16.960 "trsvcid": "4420", 00:29:16.960 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:16.960 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:16.960 "hdgst": false, 00:29:16.960 "ddgst": false 00:29:16.960 }, 00:29:16.960 "method": "bdev_nvme_attach_controller" 00:29:16.960 },{ 00:29:16.960 "params": { 00:29:16.960 "name": "Nvme10", 00:29:16.960 "trtype": "tcp", 00:29:16.960 "traddr": "10.0.0.2", 00:29:16.960 "adrfam": "ipv4", 00:29:16.960 "trsvcid": "4420", 00:29:16.960 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:16.960 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:16.960 "hdgst": false, 00:29:16.960 "ddgst": false 00:29:16.960 }, 00:29:16.960 "method": "bdev_nvme_attach_controller" 00:29:16.960 }' 00:29:16.960 [2024-11-10 00:02:42.963376] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:29:16.960 [2024-11-10 00:02:42.963531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554415 ] 00:29:16.960 [2024-11-10 00:02:43.104147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.217 [2024-11-10 00:02:43.234153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.114 Running I/O for 1 seconds... 00:29:20.048 1472.00 IOPS, 92.00 MiB/s 00:29:20.048 Latency(us) 00:29:20.048 [2024-11-09T23:02:46.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.048 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.048 Verification LBA range: start 0x0 length 0x400 00:29:20.048 Nvme1n1 : 1.12 171.85 10.74 0.00 0.00 368449.86 42719.76 293601.28 00:29:20.048 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.048 Verification LBA range: start 0x0 length 0x400 00:29:20.048 Nvme2n1 : 1.21 211.40 13.21 0.00 0.00 291625.53 22136.60 292047.83 00:29:20.048 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.048 Verification LBA range: start 0x0 length 0x400 00:29:20.048 Nvme3n1 : 1.08 177.80 11.11 0.00 0.00 342141.66 19612.25 315349.52 00:29:20.048 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.048 Verification LBA range: start 0x0 length 0x400 00:29:20.048 Nvme4n1 : 1.20 213.64 13.35 0.00 0.00 281435.59 20583.16 307582.29 00:29:20.048 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.048 Verification LBA range: start 0x0 length 0x400 00:29:20.048 Nvme5n1 : 1.16 165.30 10.33 0.00 0.00 356607.62 23884.23 349525.33 00:29:20.048 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.048 Verification LBA range: start 0x0 length 0x400 00:29:20.048 Nvme6n1 : 1.23 207.87 12.99 0.00 0.00 279881.20 24466.77 299815.06 00:29:20.048 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.048 Verification LBA range: start 0x0 length 0x400 00:29:20.048 Nvme7n1 : 1.24 207.29 12.96 0.00 0.00 275843.79 19806.44 301368.51 00:29:20.048 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.048 
Verification LBA range: start 0x0 length 0x400 00:29:20.048 Nvme8n1 : 1.22 209.76 13.11 0.00 0.00 267289.41 20194.80 330883.98 00:29:20.048 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.048 Verification LBA range: start 0x0 length 0x400 00:29:20.048 Nvme9n1 : 1.21 212.00 13.25 0.00 0.00 258341.93 37671.06 278066.82 00:29:20.049 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:20.049 Verification LBA range: start 0x0 length 0x400 00:29:20.049 Nvme10n1 : 1.24 206.08 12.88 0.00 0.00 262914.65 22039.51 323116.75 00:29:20.049 [2024-11-09T23:02:46.250Z] =================================================================================================================== 00:29:20.049 [2024-11-09T23:02:46.250Z] Total : 1982.98 123.94 0.00 0.00 293808.81 19612.25 349525.33 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:20.982 rmmod nvme_tcp 00:29:20.982 rmmod nvme_fabrics 00:29:20.982 rmmod nvme_keyring 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3553688 ']' 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3553688 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3553688 ']' 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3553688 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:20.982 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3553688 00:29:21.247 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:21.247 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:21.247 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3553688' 00:29:21.247 killing process with pid 3553688 00:29:21.247 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3553688 00:29:21.247 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3553688 00:29:23.777 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.777 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.777 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.777 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:23.777 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:23.777 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.777 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.777 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.777 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.777 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.777 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.777 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.308 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.308 00:29:26.308 real 0m16.434s 00:29:26.308 user 0m52.183s 00:29:26.308 sys 0m3.745s 00:29:26.308 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:26.308 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:26.308 ************************************ 00:29:26.308 END TEST nvmf_shutdown_tc1 00:29:26.308 ************************************ 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 
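The tc1 teardown traced above (stoptarget at shutdown.sh@42-46 followed by nvmftestfini) reduces to a short cleanup sequence. A condensed sketch of what the trace shows; helper names come from SPDK's nvmf/common.sh, the pid variable stands for the nvmf_tgt started for this test, and the body of _remove_spdk_ns is not echoed in this log, so that step is left as the call the trace records:

    # stoptarget + nvmftestfini, condensed from the trace
    rm -f ./local-job0-0-verify.state
    rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
    rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
    sync
    modprobe -v -r nvme-tcp       # also pulls out nvme_fabrics / nvme_keyring (the rmmod lines above)
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess: stop the nvmf_tgt reactor process
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: strip only the SPDK-tagged firewall rules
    _remove_spdk_ns                                       # tears down the test namespace (body not shown here)
    ip -4 addr flush cvl_0_1

The same pattern repeats at the end of tc2 further down, which is why the rmmod/killprocess/iptables lines appear twice in this section.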
00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:26.308 ************************************ 00:29:26.308 START TEST nvmf_shutdown_tc2 00:29:26.308 ************************************ 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:26.308 00:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.308 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:26.308 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:26.308 00:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:26.309 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:26.309 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:26.309 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.309 00:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:29:26.309 00:29:26.309 --- 10.0.0.2 ping statistics --- 00:29:26.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.309 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:29:26.309 00:29:26.309 --- 10.0.0.1 ping statistics --- 00:29:26.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.309 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:26.309 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.310 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.310 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3555579 00:29:26.310 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:26.310 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3555579 00:29:26.310 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3555579 ']' 00:29:26.310 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.310 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:26.310 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
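The nvmf_tcp_init sequence at 00:02:52 above wires the two e810 ports into a loopback topology before the target application starts: the target-side port is isolated in its own network namespace and the initiator side stays in the root namespace. Collected from the trace (the interface, namespace, and address names are exactly the ones the script printed; only the abbreviated iptables comment is shortened here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the rule is tagged with an SPDK_NVMF comment so teardown can remove it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The namespace prefix is then folded into NVMF_APP (common.sh@293 above), which is why the nvmf_tgt launch a few lines below runs under "ip netns exec cvl_0_0_ns_spdk".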
00:29:26.310 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:26.310 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.310 [2024-11-10 00:02:52.285518] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:29:26.310 [2024-11-10 00:02:52.285686] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.310 [2024-11-10 00:02:52.428226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:26.568 [2024-11-10 00:02:52.565539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.568 [2024-11-10 00:02:52.565633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.568 [2024-11-10 00:02:52.565660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.568 [2024-11-10 00:02:52.565684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.568 [2024-11-10 00:02:52.565704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:26.568 [2024-11-10 00:02:52.568493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:26.568 [2024-11-10 00:02:52.568619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:26.568 [2024-11-10 00:02:52.568714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.568 [2024-11-10 00:02:52.568718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.133 [2024-11-10 00:02:53.266287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:27.133 00:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.133 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.392 Malloc1 
00:29:27.392 [2024-11-10 00:02:53.425148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.392 Malloc2 00:29:27.650 Malloc3 00:29:27.650 Malloc4 00:29:27.650 Malloc5 00:29:27.907 Malloc6 00:29:27.907 Malloc7 00:29:28.165 Malloc8 00:29:28.165 Malloc9 00:29:28.165 Malloc10 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3555888 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3555888 /var/tmp/bdevperf.sock 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3555888 ']' 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:28.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
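The subsystem setup between 00:02:53 and 00:02:54 above is driven by the repeated `cat` at shutdown.sh@29: one RPC fragment per subsystem is appended to rpcs.txt and the whole batch is then replayed by the bare rpc_cmd at shutdown.sh@36. The fragment itself is not echoed in this trace; based on the Malloc1..Malloc10 bdevs, the nqn.2016-06.io.spdk:cnode1..10 names used later, and the 10.0.0.2:4420 listener notice, a plausible shape is sketched below. The rpc.py method names are standard, but the bdev size and serial-number arguments are guesses, not values taken from this log:

    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 128 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    rpc_cmd < rpcs.txt    # one RPC session executes the whole batch (shutdown.sh@36)

The actual fragment lives in spdk/test/nvmf/target/shutdown.sh and may differ in the details above.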
00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.165 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:28.165 { 00:29:28.165 "params": { 00:29:28.165 "name": "Nvme$subsystem", 00:29:28.165 "trtype": "$TEST_TRANSPORT", 00:29:28.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.165 "adrfam": "ipv4", 00:29:28.165 "trsvcid": "$NVMF_PORT", 00:29:28.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.165 "hdgst": ${hdgst:-false}, 00:29:28.165 "ddgst": ${ddgst:-false} 00:29:28.165 }, 00:29:28.165 "method": "bdev_nvme_attach_controller" 00:29:28.165 } 00:29:28.165 EOF 00:29:28.165 )") 00:29:28.166 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:28.166 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:28.166 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:28.166 { 00:29:28.166 "params": { 00:29:28.166 "name": "Nvme$subsystem", 00:29:28.166 "trtype": "$TEST_TRANSPORT", 00:29:28.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.166 "adrfam": "ipv4", 00:29:28.166 "trsvcid": "$NVMF_PORT", 00:29:28.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.166 "hdgst": ${hdgst:-false}, 00:29:28.166 "ddgst": ${ddgst:-false} 00:29:28.166 }, 00:29:28.166 "method": "bdev_nvme_attach_controller" 00:29:28.166 } 00:29:28.166 EOF 00:29:28.166 )") 00:29:28.166 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:28.423 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:28.423 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:28.423 { 00:29:28.423 "params": { 00:29:28.423 "name": "Nvme$subsystem", 00:29:28.423 "trtype": "$TEST_TRANSPORT", 00:29:28.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.423 "adrfam": "ipv4", 00:29:28.423 "trsvcid": "$NVMF_PORT", 00:29:28.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.423 "hdgst": ${hdgst:-false}, 00:29:28.423 "ddgst": ${ddgst:-false} 00:29:28.423 }, 00:29:28.423 "method": "bdev_nvme_attach_controller" 00:29:28.423 } 00:29:28.423 EOF 00:29:28.424 )") 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:28.424 { 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme$subsystem", 00:29:28.424 
"trtype": "$TEST_TRANSPORT", 00:29:28.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "$NVMF_PORT", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.424 "hdgst": ${hdgst:-false}, 00:29:28.424 "ddgst": ${ddgst:-false} 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 } 00:29:28.424 EOF 00:29:28.424 )") 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:28.424 { 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme$subsystem", 00:29:28.424 "trtype": "$TEST_TRANSPORT", 00:29:28.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "$NVMF_PORT", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.424 "hdgst": ${hdgst:-false}, 00:29:28.424 "ddgst": ${ddgst:-false} 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 } 00:29:28.424 EOF 00:29:28.424 )") 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:28.424 { 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme$subsystem", 00:29:28.424 "trtype": "$TEST_TRANSPORT", 00:29:28.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "$NVMF_PORT", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.424 "hdgst": ${hdgst:-false}, 00:29:28.424 "ddgst": ${ddgst:-false} 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 } 00:29:28.424 EOF 00:29:28.424 )") 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:28.424 { 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme$subsystem", 00:29:28.424 "trtype": "$TEST_TRANSPORT", 00:29:28.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "$NVMF_PORT", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.424 "hdgst": ${hdgst:-false}, 00:29:28.424 "ddgst": ${ddgst:-false} 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 } 00:29:28.424 EOF 00:29:28.424 )") 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:28.424 00:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:28.424 { 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme$subsystem", 00:29:28.424 "trtype": "$TEST_TRANSPORT", 00:29:28.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "$NVMF_PORT", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.424 "hdgst": ${hdgst:-false}, 00:29:28.424 "ddgst": ${ddgst:-false} 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 } 00:29:28.424 EOF 00:29:28.424 )") 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:28.424 { 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme$subsystem", 00:29:28.424 "trtype": "$TEST_TRANSPORT", 00:29:28.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "$NVMF_PORT", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.424 "hdgst": ${hdgst:-false}, 00:29:28.424 "ddgst": ${ddgst:-false} 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 } 00:29:28.424 EOF 00:29:28.424 )") 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:28.424 { 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme$subsystem", 00:29:28.424 "trtype": "$TEST_TRANSPORT", 00:29:28.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "$NVMF_PORT", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.424 "hdgst": ${hdgst:-false}, 00:29:28.424 "ddgst": ${ddgst:-false} 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 } 00:29:28.424 EOF 00:29:28.424 )") 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
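Each heredoc fragment emitted by the loop above becomes one bdev_nvme_attach_controller entry in the JSON that bdevperf receives on /dev/fd/63; the fully expanded config is printed just below. For reference, the first entry is equivalent to issuing the following against the bdevperf RPC socket by hand (socket path, NQNs, and address are taken from the trace; running the call manually, rather than via the generated config, is the only assumption):

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # hdgst/ddgst stay at their defaults (false), matching the generated config

bdevperf itself is launched with "-q 64 -o 65536 -w verify -t 10" (shutdown.sh@103 above), which is why every job in the results tables reports "workload: verify, depth: 64, IO size: 65536".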
00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:28.424 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme1", 00:29:28.424 "trtype": "tcp", 00:29:28.424 "traddr": "10.0.0.2", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "4420", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:28.424 "hdgst": false, 00:29:28.424 "ddgst": false 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 },{ 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme2", 00:29:28.424 "trtype": "tcp", 00:29:28.424 "traddr": "10.0.0.2", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "4420", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:28.424 "hdgst": false, 00:29:28.424 "ddgst": false 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 },{ 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme3", 00:29:28.424 "trtype": "tcp", 00:29:28.424 "traddr": "10.0.0.2", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "4420", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:28.424 "hdgst": false, 00:29:28.424 "ddgst": false 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 },{ 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme4", 00:29:28.424 "trtype": "tcp", 00:29:28.424 "traddr": "10.0.0.2", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "4420", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:28.424 "hdgst": false, 00:29:28.424 "ddgst": false 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 },{ 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme5", 00:29:28.424 "trtype": "tcp", 00:29:28.424 "traddr": "10.0.0.2", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "4420", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:28.424 "hdgst": false, 00:29:28.424 "ddgst": false 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 },{ 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme6", 00:29:28.424 "trtype": "tcp", 00:29:28.424 "traddr": "10.0.0.2", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "4420", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:28.424 "hdgst": false, 00:29:28.424 "ddgst": false 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 },{ 00:29:28.424 "params": { 00:29:28.424 "name": "Nvme7", 00:29:28.424 "trtype": "tcp", 00:29:28.424 "traddr": "10.0.0.2", 00:29:28.424 "adrfam": "ipv4", 00:29:28.424 "trsvcid": "4420", 00:29:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:28.424 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:28.424 "hdgst": false, 00:29:28.424 "ddgst": false 00:29:28.424 }, 00:29:28.424 "method": "bdev_nvme_attach_controller" 00:29:28.424 },{ 00:29:28.424 "params": { 00:29:28.425 "name": "Nvme8", 00:29:28.425 "trtype": "tcp", 00:29:28.425 "traddr": "10.0.0.2", 00:29:28.425 "adrfam": "ipv4", 00:29:28.425 "trsvcid": "4420", 00:29:28.425 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:28.425 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:28.425 "hdgst": false, 00:29:28.425 "ddgst": false 00:29:28.425 }, 00:29:28.425 "method": "bdev_nvme_attach_controller" 00:29:28.425 },{ 00:29:28.425 "params": { 00:29:28.425 "name": "Nvme9", 00:29:28.425 "trtype": "tcp", 00:29:28.425 "traddr": "10.0.0.2", 00:29:28.425 "adrfam": "ipv4", 00:29:28.425 "trsvcid": "4420", 00:29:28.425 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:28.425 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:28.425 "hdgst": false, 00:29:28.425 "ddgst": false 00:29:28.425 }, 00:29:28.425 "method": "bdev_nvme_attach_controller" 00:29:28.425 },{ 00:29:28.425 "params": { 00:29:28.425 "name": "Nvme10", 00:29:28.425 "trtype": "tcp", 00:29:28.425 "traddr": "10.0.0.2", 00:29:28.425 "adrfam": "ipv4", 00:29:28.425 "trsvcid": "4420", 00:29:28.425 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:28.425 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:28.425 "hdgst": false, 00:29:28.425 "ddgst": false 00:29:28.425 }, 00:29:28.425 "method": "bdev_nvme_attach_controller" 00:29:28.425 }' 00:29:28.425 [2024-11-10 00:02:54.444427] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:29:28.425 [2024-11-10 00:02:54.444581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3555888 ] 00:29:28.425 [2024-11-10 00:02:54.581868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.683 [2024-11-10 00:02:54.710902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.585 Running I/O for 10 seconds... 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.150 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.151 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:31.151 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:31.151 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=136 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3555888 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3555888 ']' 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3555888 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3555888 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:31.409 00:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3555888' 00:29:31.409 killing process with pid 3555888 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3555888 00:29:31.409 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3555888 00:29:31.671 1445.00 IOPS, 90.31 MiB/s [2024-11-09T23:02:57.872Z] Received shutdown signal, test time was about 1.103141 seconds 00:29:31.671 00:29:31.671 Latency(us) 00:29:31.671 [2024-11-09T23:02:57.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.671 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.671 Verification LBA range: start 0x0 length 0x400 00:29:31.671 Nvme1n1 : 1.05 188.21 11.76 0.00 0.00 334182.90 5606.97 309135.74 00:29:31.671 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.671 Verification LBA range: start 0x0 length 0x400 00:29:31.671 Nvme2n1 : 1.10 233.00 14.56 0.00 0.00 264017.92 40777.96 262532.36 00:29:31.671 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.671 Verification LBA range: start 0x0 length 0x400 00:29:31.671 Nvme3n1 : 1.02 187.82 11.74 0.00 0.00 321504.14 21554.06 313796.08 00:29:31.671 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.671 Verification LBA range: start 0x0 length 0x400 00:29:31.671 Nvme4n1 : 1.10 232.25 14.52 0.00 0.00 257710.08 23884.23 327777.09 00:29:31.671 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.671 Verification LBA range: start 0x0 length 0x400 00:29:31.671 Nvme5n1 : 1.08 178.17 11.14 0.00 0.00 328910.06 24078.41 323116.75 00:29:31.671 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.671 Verification LBA range: start 0x0 length 0x400 00:29:31.671 Nvme6n1 : 1.07 184.46 11.53 0.00 0.00 309148.18 6310.87 304475.40 00:29:31.671 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.671 Verification LBA range: start 0x0 length 0x400 00:29:31.671 Nvme7n1 : 1.06 184.31 11.52 0.00 0.00 302786.72 6456.51 288940.94 00:29:31.671 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.671 Verification LBA range: start 0x0 length 0x400 00:29:31.671 Nvme8n1 : 1.05 187.92 11.75 0.00 0.00 286968.95 6456.51 312242.63 00:29:31.671 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.671 Verification LBA range: start 0x0 length 0x400 00:29:31.671 Nvme9n1 : 1.09 176.24 11.02 0.00 0.00 306415.19 25826.04 335544.32 00:29:31.671 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:31.671 Verification LBA range: start 0x0 length 0x400 00:29:31.671 Nvme10n1 : 1.09 176.88 11.06 0.00 0.00 298644.42 35146.71 327777.09 00:29:31.671 [2024-11-09T23:02:57.872Z] =================================================================================================================== 00:29:31.671 [2024-11-09T23:02:57.872Z] Total : 1929.25 120.58 0.00 0.00 298610.86 5606.97 335544.32 00:29:32.607 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:33.592 00:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3555579 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:33.592 rmmod nvme_tcp 00:29:33.592 rmmod nvme_fabrics 00:29:33.592 rmmod nvme_keyring 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3555579 ']' 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3555579 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3555579 ']' 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3555579 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:33.592 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3555579 00:29:33.894 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:33.894 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:33.894 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3555579' 00:29:33.894 killing process with pid 3555579 00:29:33.894 00:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3555579 00:29:33.894 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3555579 00:29:36.437 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.437 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:36.437 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:36.437 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:36.437 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:36.437 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:36.437 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:36.437 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.437 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.437 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.437 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.437 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:38.978 00:29:38.978 real 0m12.544s 00:29:38.978 user 0m42.586s 00:29:38.978 sys 0m1.997s 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.978 ************************************ 00:29:38.978 END TEST nvmf_shutdown_tc2 00:29:38.978 ************************************ 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:38.978 ************************************ 00:29:38.978 START TEST nvmf_shutdown_tc3 00:29:38.978 ************************************ 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:38.978 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.979 00:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:38.979 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:38.979 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.979 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:38.980 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.980 00:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:38.980 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:38.980 00:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:38.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:38.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:29:38.980 00:29:38.980 --- 10.0.0.2 ping statistics --- 00:29:38.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.980 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:38.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:38.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:29:38.980 00:29:38.980 --- 10.0.0.1 ping statistics --- 00:29:38.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.980 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3557308 00:29:38.980 00:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3557308 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3557308 ']' 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:38.980 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.981 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:38.981 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:38.981 [2024-11-10 00:03:04.986277] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:29:38.981 [2024-11-10 00:03:04.986440] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.981 [2024-11-10 00:03:05.149706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:39.247 [2024-11-10 00:03:05.297339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:39.247 [2024-11-10 00:03:05.297421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:39.247 [2024-11-10 00:03:05.297448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:39.247 [2024-11-10 00:03:05.297472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:39.247 [2024-11-10 00:03:05.297492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:39.247 [2024-11-10 00:03:05.300341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:39.247 [2024-11-10 00:03:05.300444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:39.247 [2024-11-10 00:03:05.300489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.247 [2024-11-10 00:03:05.300494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.813 [2024-11-10 00:03:05.965042] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.813 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.813 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.813 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.813 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.813 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:39.813 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:39.813 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.813 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.072 Malloc1 00:29:40.072 [2024-11-10 00:03:06.101097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.072 Malloc2 00:29:40.330 Malloc3 00:29:40.330 Malloc4 00:29:40.330 Malloc5 00:29:40.588 Malloc6 00:29:40.588 Malloc7 00:29:40.588 Malloc8 00:29:40.846 Malloc9 00:29:40.846 Malloc10 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3557636 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3557636 /var/tmp/bdevperf.sock 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3557636 ']' 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.846 { 00:29:40.846 "params": { 00:29:40.846 "name": "Nvme$subsystem", 00:29:40.846 "trtype": "$TEST_TRANSPORT", 00:29:40.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "$NVMF_PORT", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.846 "hdgst": ${hdgst:-false}, 00:29:40.846 "ddgst": ${ddgst:-false} 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 } 00:29:40.846 EOF 00:29:40.846 )") 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.846 { 00:29:40.846 "params": { 00:29:40.846 "name": "Nvme$subsystem", 00:29:40.846 "trtype": "$TEST_TRANSPORT", 00:29:40.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "$NVMF_PORT", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.846 "hdgst": ${hdgst:-false}, 00:29:40.846 "ddgst": ${ddgst:-false} 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 } 00:29:40.846 EOF 00:29:40.846 )") 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:40.846 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:40.846 { 00:29:40.846 "params": { 00:29:40.846 "name": 
"Nvme$subsystem", 00:29:40.846 "trtype": "$TEST_TRANSPORT", 00:29:40.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.846 "adrfam": "ipv4", 00:29:40.846 "trsvcid": "$NVMF_PORT", 00:29:40.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.846 "hdgst": ${hdgst:-false}, 00:29:40.846 "ddgst": ${ddgst:-false} 00:29:40.846 }, 00:29:40.846 "method": "bdev_nvme_attach_controller" 00:29:40.846 } 00:29:40.846 EOF 00:29:40.846 )") 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.105 { 00:29:41.105 "params": { 00:29:41.105 "name": "Nvme$subsystem", 00:29:41.105 "trtype": "$TEST_TRANSPORT", 00:29:41.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.105 "adrfam": "ipv4", 00:29:41.105 "trsvcid": "$NVMF_PORT", 00:29:41.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.105 "hdgst": ${hdgst:-false}, 00:29:41.105 "ddgst": ${ddgst:-false} 00:29:41.105 }, 00:29:41.105 "method": "bdev_nvme_attach_controller" 00:29:41.105 } 00:29:41.105 EOF 00:29:41.105 )") 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.105 { 00:29:41.105 "params": { 00:29:41.105 "name": "Nvme$subsystem", 00:29:41.105 "trtype": "$TEST_TRANSPORT", 00:29:41.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.105 "adrfam": "ipv4", 00:29:41.105 "trsvcid": "$NVMF_PORT", 00:29:41.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.105 "hdgst": ${hdgst:-false}, 00:29:41.105 "ddgst": ${ddgst:-false} 00:29:41.105 }, 00:29:41.105 "method": "bdev_nvme_attach_controller" 00:29:41.105 } 00:29:41.105 EOF 00:29:41.105 )") 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.105 { 00:29:41.105 "params": { 00:29:41.105 "name": "Nvme$subsystem", 00:29:41.105 "trtype": "$TEST_TRANSPORT", 00:29:41.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.105 "adrfam": "ipv4", 00:29:41.105 "trsvcid": "$NVMF_PORT", 00:29:41.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.105 "hdgst": ${hdgst:-false}, 00:29:41.105 "ddgst": ${ddgst:-false} 00:29:41.105 }, 00:29:41.105 "method": "bdev_nvme_attach_controller" 00:29:41.105 } 00:29:41.105 EOF 00:29:41.105 )") 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.105 { 00:29:41.105 "params": { 00:29:41.105 "name": "Nvme$subsystem", 00:29:41.105 "trtype": "$TEST_TRANSPORT", 00:29:41.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.105 "adrfam": "ipv4", 00:29:41.105 "trsvcid": "$NVMF_PORT", 00:29:41.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.105 "hdgst": ${hdgst:-false}, 00:29:41.105 "ddgst": ${ddgst:-false} 00:29:41.105 }, 00:29:41.105 "method": "bdev_nvme_attach_controller" 00:29:41.105 } 00:29:41.105 EOF 00:29:41.105 )") 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.105 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.105 { 00:29:41.105 "params": { 00:29:41.105 "name": "Nvme$subsystem", 00:29:41.105 "trtype": "$TEST_TRANSPORT", 00:29:41.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.105 "adrfam": "ipv4", 00:29:41.105 "trsvcid": "$NVMF_PORT", 00:29:41.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.105 "hdgst": ${hdgst:-false}, 00:29:41.105 "ddgst": ${ddgst:-false} 00:29:41.105 }, 00:29:41.105 "method": "bdev_nvme_attach_controller" 00:29:41.106 } 00:29:41.106 EOF 00:29:41.106 )") 00:29:41.106 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.106 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.106 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.106 { 00:29:41.106 "params": { 00:29:41.106 "name": "Nvme$subsystem", 00:29:41.106 "trtype": "$TEST_TRANSPORT", 00:29:41.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.106 "adrfam": "ipv4", 00:29:41.106 "trsvcid": "$NVMF_PORT", 00:29:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.106 "hdgst": ${hdgst:-false}, 00:29:41.106 "ddgst": ${ddgst:-false} 00:29:41.106 }, 00:29:41.106 "method": "bdev_nvme_attach_controller" 00:29:41.106 } 00:29:41.106 EOF 00:29:41.106 )") 00:29:41.106 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.106 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.106 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.106 { 00:29:41.106 "params": { 00:29:41.106 "name": "Nvme$subsystem", 00:29:41.106 "trtype": "$TEST_TRANSPORT", 00:29:41.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.106 "adrfam": "ipv4", 00:29:41.106 "trsvcid": "$NVMF_PORT", 00:29:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.106 "hdgst": ${hdgst:-false}, 00:29:41.106 "ddgst": ${ddgst:-false} 00:29:41.106 }, 00:29:41.106 "method": "bdev_nvme_attach_controller" 00:29:41.106 } 00:29:41.106 EOF 00:29:41.106 )") 00:29:41.106 00:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:41.106 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:41.106 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:41.106 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:41.106 "params": { 00:29:41.106 "name": "Nvme1", 00:29:41.106 "trtype": "tcp", 00:29:41.106 "traddr": "10.0.0.2", 00:29:41.106 "adrfam": "ipv4", 00:29:41.106 "trsvcid": "4420", 00:29:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:41.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:41.106 "hdgst": false, 00:29:41.106 "ddgst": false 00:29:41.106 }, 00:29:41.106 "method": "bdev_nvme_attach_controller" 00:29:41.106 },{ 00:29:41.106 "params": { 00:29:41.106 "name": "Nvme2", 00:29:41.106 "trtype": "tcp", 00:29:41.106 "traddr": "10.0.0.2", 00:29:41.106 "adrfam": "ipv4", 00:29:41.106 "trsvcid": "4420", 00:29:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:41.106 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:41.106 "hdgst": false, 00:29:41.106 "ddgst": false 00:29:41.106 }, 00:29:41.106 "method": "bdev_nvme_attach_controller" 00:29:41.106 },{ 00:29:41.106 "params": { 00:29:41.106 "name": "Nvme3", 00:29:41.106 "trtype": "tcp", 00:29:41.106 "traddr": "10.0.0.2", 00:29:41.106 "adrfam": "ipv4", 00:29:41.106 "trsvcid": "4420", 00:29:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:41.106 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:41.106 "hdgst": false, 00:29:41.106 "ddgst": false 00:29:41.106 }, 00:29:41.106 "method": "bdev_nvme_attach_controller" 00:29:41.106 },{ 00:29:41.106 "params": { 00:29:41.106 "name": "Nvme4", 00:29:41.106 "trtype": "tcp", 00:29:41.106 "traddr": "10.0.0.2", 00:29:41.106 "adrfam": "ipv4", 00:29:41.106 "trsvcid": "4420", 00:29:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:41.106 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:41.106 "hdgst": false, 00:29:41.106 "ddgst": false 00:29:41.106 }, 00:29:41.106 "method": "bdev_nvme_attach_controller" 00:29:41.106 },{ 00:29:41.106 "params": { 00:29:41.106 "name": "Nvme5", 00:29:41.106 "trtype": "tcp", 00:29:41.106 "traddr": "10.0.0.2", 00:29:41.106 "adrfam": "ipv4", 00:29:41.106 "trsvcid": "4420", 00:29:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:41.106 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:41.106 "hdgst": false, 00:29:41.106 "ddgst": false 00:29:41.106 }, 00:29:41.106 "method": "bdev_nvme_attach_controller" 00:29:41.106 },{ 00:29:41.106 "params": { 00:29:41.106 "name": "Nvme6", 00:29:41.106 "trtype": "tcp", 00:29:41.106 "traddr": "10.0.0.2", 00:29:41.106 "adrfam": "ipv4", 00:29:41.106 "trsvcid": "4420", 00:29:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:41.106 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:41.106 "hdgst": false, 00:29:41.106 "ddgst": false 00:29:41.106 }, 00:29:41.106 "method": "bdev_nvme_attach_controller" 00:29:41.106 },{ 00:29:41.106 "params": { 00:29:41.106 "name": "Nvme7", 00:29:41.106 "trtype": "tcp", 00:29:41.106 "traddr": "10.0.0.2", 00:29:41.106 "adrfam": "ipv4", 00:29:41.106 "trsvcid": "4420", 00:29:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:41.106 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:41.106 "hdgst": false, 00:29:41.106 "ddgst": false 00:29:41.106 }, 00:29:41.106 "method": "bdev_nvme_attach_controller" 00:29:41.106 },{ 00:29:41.106 "params": { 00:29:41.106 "name": "Nvme8", 00:29:41.106 "trtype": "tcp", 
00:29:41.106 "traddr": "10.0.0.2", 00:29:41.106 "adrfam": "ipv4", 00:29:41.106 "trsvcid": "4420", 00:29:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:41.106 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:41.106 "hdgst": false, 00:29:41.106 "ddgst": false 00:29:41.106 }, 00:29:41.106 "method": "bdev_nvme_attach_controller" 00:29:41.106 },{ 00:29:41.106 "params": { 00:29:41.106 "name": "Nvme9", 00:29:41.106 "trtype": "tcp", 00:29:41.106 "traddr": "10.0.0.2", 00:29:41.106 "adrfam": "ipv4", 00:29:41.106 "trsvcid": "4420", 00:29:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:41.106 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:41.106 "hdgst": false, 00:29:41.106 "ddgst": false 00:29:41.106 }, 00:29:41.106 "method": "bdev_nvme_attach_controller" 00:29:41.106 },{ 00:29:41.106 "params": { 00:29:41.106 "name": "Nvme10", 00:29:41.106 "trtype": "tcp", 00:29:41.106 "traddr": "10.0.0.2", 00:29:41.106 "adrfam": "ipv4", 00:29:41.106 "trsvcid": "4420", 00:29:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:41.106 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:41.106 "hdgst": false, 00:29:41.106 "ddgst": false 00:29:41.106 }, 00:29:41.106 "method": "bdev_nvme_attach_controller" 00:29:41.106 }' 00:29:41.106 [2024-11-10 00:03:07.132713] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:29:41.106 [2024-11-10 00:03:07.132853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3557636 ] 00:29:41.106 [2024-11-10 00:03:07.282072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.365 [2024-11-10 00:03:07.410688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.262 Running I/O for 10 seconds... 
00:29:43.829 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:43.829 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:29:43.829 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:43.829 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.829 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:43.829 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.829 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:43.829 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:43.829 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:43.830 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3557308 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3557308 ']' 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3557308 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:44.087 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3557308 00:29:44.364 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:44.364 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:44.364 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3557308' 00:29:44.364 killing process with pid 3557308 00:29:44.364 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3557308 00:29:44.364 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3557308 00:29:44.364 [2024-11-10 00:03:10.313555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.313664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.313687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.313705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.313723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.313740] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364
[... the same nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x618000007480 repeats with successive timestamps ...]
00:29:44.365 [2024-11-10 00:03:10.314562]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.314593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.314631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.314649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.314667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.314686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.314704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.314721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.314739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.314756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.314774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.314791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.364 [2024-11-10 00:03:10.314808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.314825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.314842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.314860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317810] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.317991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318295] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318755] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.318957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.321803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.321837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.321857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.321876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.321926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.321945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.321963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.321981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322036] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.365 [2024-11-10 00:03:10.322213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322414] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.322772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326461] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326898] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.326985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327296] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.327649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.330403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.330446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.330469] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.330489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.330508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.366 [2024-11-10 00:03:10.330525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330903] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.330990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.331330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 
00:03:10.331348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.331366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.331385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.331439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.331457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.331475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.331494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.331512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be 
set 00:29:44.367 [2024-11-10 00:03:10.331659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.331695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.331714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.331751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.331773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.331801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.331822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.331843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.331864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.331978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.332007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.332031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.332051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.332073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.332093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.332114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.332135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.332154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.332225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.332254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.332295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.332318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.332340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.332361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.332383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.332404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.332424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.332491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.332545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.332573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.332612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.332635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 [2024-11-10 00:03:10.332656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.332677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.367 
[2024-11-10 00:03:10.332698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.367 [2024-11-10 00:03:10.332717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.333985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 
00:29:44.367 [2024-11-10 00:03:10.334005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.334023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.334041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.334059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.334076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.334094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.334112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.334140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.334159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.367 [2024-11-10 00:03:10.334177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 
00:29:44.368 [2024-11-10 00:03:10.334390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 
00:29:44.368 [2024-11-10 00:03:10.334804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.334821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.337897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.337954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 
00:29:44.368 [2024-11-10 00:03:10.338451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 
00:29:44.368 [2024-11-10 00:03:10.338958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.338994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 
00:29:44.368 [2024-11-10 00:03:10.339379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.339397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.341435] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.368 [2024-11-10 00:03:10.341536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:44.368 [2024-11-10 00:03:10.341642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:44.368 [2024-11-10 00:03:10.341729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.368 [2024-11-10 00:03:10.341773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.341805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.368 [2024-11-10 00:03:10.341828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.341852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.368 [2024-11-10 00:03:10.341874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.341900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.368 [2024-11-10 00:03:10.341922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.341943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.342016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.368 [2024-11-10 00:03:10.342046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.342069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.368 [2024-11-10 00:03:10.342089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.342111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.368 [2024-11-10 00:03:10.342132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.342153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:29:44.368 [2024-11-10 00:03:10.342174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.342193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.342235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:44.368 [2024-11-10 00:03:10.342282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:44.368 [2024-11-10 00:03:10.342336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:44.368 [2024-11-10 00:03:10.342410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.368 [2024-11-10 00:03:10.342440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.342463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.368 [2024-11-10 00:03:10.342484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.342506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.368 [2024-11-10 00:03:10.342527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.342548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.368 [2024-11-10 00:03:10.342581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.342610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.342733] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.368 [2024-11-10 00:03:10.342822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.368 [2024-11-10 00:03:10.342854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.342905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.368 [2024-11-10 00:03:10.342929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.342965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.368 [2024-11-10 00:03:10.342988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.343000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.343013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.368 [2024-11-10 00:03:10.343038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.343040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.343063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.343064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.368 [2024-11-10 00:03:10.343085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.343087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.343103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.343113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.368 [2024-11-10 00:03:10.343128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.343136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.343148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.343161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.368 [2024-11-10 00:03:10.343167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.343184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.368 [2024-11-10 00:03:10.343185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.343205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.368 [2024-11-10 00:03:10.343210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:44.369 [2024-11-10 00:03:10.343242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 
00:29:44.369 [2024-11-10 00:03:10.343467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:44.369 [2024-11-10 00:03:10.343936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.343960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.343976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.343979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:44.369 [2024-11-10 00:03:10.344167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:44.369 [2024-11-10 00:03:10.344273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344538] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.344956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.344979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.369 [2024-11-10 00:03:10.345722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.369 [2024-11-10 00:03:10.345747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.345769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.345794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.345816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.345840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.345873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.345897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.345919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.345943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.345965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.345990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.346012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.346037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.346058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.346444] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.370 [2024-11-10 00:03:10.346547] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.370 [2024-11-10 00:03:10.346548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.346952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the 
state(6) to be set [tcp.c:1773 recv-state error for tqpair=0x618000009080 repeated ~40 more times with advancing timestamps between 00:03:10.346971 and 00:03:10.347737; duplicate lines condensed] 00:29:44.370 [2024-11-10 00:03:10.347754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the
state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.347772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.347789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.347806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:44.370 [2024-11-10 00:03:10.348074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 
00:03:10.348506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.348962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.348987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349009] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.349034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.349080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.349141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.349188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.349234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.349281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.349327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.349373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.349428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.349475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.349521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.370 [2024-11-10 00:03:10.349566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.370 [2024-11-10 00:03:10.349606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.349632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.349654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.349678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.349700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.349739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.349761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.349786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.349808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.349833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.349854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.349885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.349906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.349930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.349952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.349994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.350960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.350982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.351008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.351030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.351055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.351077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.351102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.351138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.351163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.351184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.351207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.351228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.351250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb600 is same with the state(6) to be set 00:29:44.371 [2024-11-10 00:03:10.351655] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
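
Every data-path command in this stretch completes with "ABORTED - SQ DELETION (00/08)": the submission queue was deleted underneath the outstanding I/O while the failover test tore the TCP qpairs down. The two hex fields are the NVMe status code type and status code as the completion printer renders them. A minimal decoding sketch, assuming that "(sct/sc)" layout and covering only the generic (type 0) codes that actually appear in this log:

```python
# Minimal sketch (assumptions noted above): decode the "(SCT/SC)" pair printed
# by the spdk_nvme_print_completion notices, e.g. "ABORTED - SQ DELETION (00/08)".
GENERIC_STATUS = {          # SCT 0x0 (generic command status), partial table only
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(field: str) -> str:
    """field is the hex pair from the log, e.g. '00/08'."""
    sct, sc = (int(x, 16) for x in field.split("/"))
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct=0x{sct:x} sc=0x{sc:02x}"

print(decode_status("00/08"))   # -> ABORTED - SQ DELETION
```
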
00:29:44.371 [2024-11-10 00:03:10.351734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.371 [2024-11-10 00:03:10.351763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.351786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.371 [2024-11-10 00:03:10.351807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.351829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.371 [2024-11-10 00:03:10.351850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.351871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.371 [2024-11-10 00:03:10.351896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.351915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:44.371 [2024-11-10 00:03:10.351980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:44.371 [2024-11-10 00:03:10.352031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:44.371 [2024-11-10 00:03:10.352105] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
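
Besides the per-I/O aborts, this stretch also records the admin queue's ASYNC EVENT REQUESTs being aborted, "Failed to flush tqpair ... (9): Bad file descriptor" once the sockets are already gone, and back-to-back "Unable to perform failover, already in progress" notices for cnode10 and cnode2. When triaging a burst like this it helps to collapse the wall of text into per-message counts; below is a hypothetical condenser, not part of the test suite, assuming the timestamped line format shown here:

```python
import re
import sys
from collections import Counter

# Hypothetical condenser for the SPDK console output above: count how often each
# source-location message fires, masking timestamps and per-I/O fields so the
# near-identical NOTICE/ERROR lines collapse into a short summary.
LINE_RE = re.compile(
    r"\[\d{4}-\d{2}-\d{2} [\d:.]+\] "        # [2024-11-10 00:03:10.346058]
    r"(\S+?:\s*\d+:\w+): "                   # nvme_qpair.c: 474:spdk_nvme_print_completion
    r"\*(NOTICE|ERROR)\*: "                  # severity
    r"(.+?)(?= \[\d{4}-|\Z)",                # message body, up to the next timestamp
    re.S,
)

def summarize(text: str) -> Counter:
    counts = Counter()
    for src, level, msg in LINE_RE.findall(text):
        # Mask volatile fields (addresses, cid/lba, elapsed times) so repeats collapse.
        msg = re.sub(r"0x[0-9a-fA-F]+|\d+", "<n>", msg)
        counts[f"{level} {src}: {msg.strip()}"] += 1
    return counts

if __name__ == "__main__":
    for msg, n in summarize(sys.stdin.read()).most_common(10):
        print(f"{n:6d}  {msg}")
```
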
00:29:44.371 [2024-11-10 00:03:10.352148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:44.371 [2024-11-10 00:03:10.352234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.371 [2024-11-10 00:03:10.352263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.352286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.371 [2024-11-10 00:03:10.352308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.352329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.371 [2024-11-10 00:03:10.352350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.352371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.371 [2024-11-10 00:03:10.352392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.352412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:44.371 [2024-11-10 00:03:10.353823] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.371 [2024-11-10 00:03:10.354202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.354953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.354976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.371 [2024-11-10 00:03:10.355795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.371 [2024-11-10 00:03:10.355819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.355845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.355867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.355896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.355934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.355960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.355982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
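
The command notices themselves are highly regular: within each burst the cid increases by one per command and the lba advances by exactly the transfer length (len:128), so around this point cid:0 maps to lba:16384, cid:1 to 16512, and so on, with every command print immediately followed by its SQ-deletion abort. A small stride check, purely illustrative, assuming the printed format (the base lba differs from burst to burst):

```python
import re

# Illustrative check of the lba/cid stride visible in the abort storm above:
# within one burst, lba should equal base_lba + 128 * cid for len:128 commands.
CMD_RE = re.compile(
    r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)"
)

def check_stride(log_text: str, base_lba: int = 16384) -> list[str]:
    problems = []
    for op, sqid, cid, lba, length in CMD_RE.findall(log_text):
        cid, lba, length = int(cid), int(lba), int(length)
        if length == 128 and lba != base_lba + cid * length:
            problems.append(f"{op} sqid:{sqid} cid:{cid}: lba {lba}, "
                            f"expected {base_lba + cid * length}")
    return problems

# The bursts around this point start at lba 16384 for cid 0:
sample = "*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128"
print(check_stride(sample))   # -> [] because 16384 + 2 * 128 == 16640
```
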
00:29:44.372 [2024-11-10 00:03:10.356527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.356956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.356982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.357004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 
00:03:10.357028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.357050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.357075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.357097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.357126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.357149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.357173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.357197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.357225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.357249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.357274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.357296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.357321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.357345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.357370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.357393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.357418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.357440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:44.372 [2024-11-10 00:03:10.359224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359770] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.359959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.359984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.360962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.372 [2024-11-10 00:03:10.360987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.372 [2024-11-10 00:03:10.361010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:44.373 [2024-11-10 00:03:10.361748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.361960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.361984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.362006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.362031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.362052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.362077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.362100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.362127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.362149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.362173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.362195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 
00:03:10.362219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.362241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.362265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.362287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.362313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.362334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.362357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9f80 is same with the state(6) to be set 00:29:44.373 [2024-11-10 00:03:10.364043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364397] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.364956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.364978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.365954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.365979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.366002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.366027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.366050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.366075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.366099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.366124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.366147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.366172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.366195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.366225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.366249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.366274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.366297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.366322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.366345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.366371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.366394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.366419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.366441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.373 [2024-11-10 00:03:10.366467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.373 [2024-11-10 00:03:10.366489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.366515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.366537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.366562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.366584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.366617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.366641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.366665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.366688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.366713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.366735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.366761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.366783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.366808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.366834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.366860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.366882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.366907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.366931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.366957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.366980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.367004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.367027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.367052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.367073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.367098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.367120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.367145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.367167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.367191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.367213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.367238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.367261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.367285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.367307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.367329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa480 is same with the state(6) to be set 00:29:44.374 [2024-11-10 00:03:10.368967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369100] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.369960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.369982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.370964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.370999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.371021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.371046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:44.374 [2024-11-10 00:03:10.371067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.371092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.371114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.371138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.371160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.371185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.371207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.371232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.371254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.371278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.371300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.371324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.371346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.371370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.371392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.374 [2024-11-10 00:03:10.371416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.374 [2024-11-10 00:03:10.371437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.371461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.371487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.371512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 
00:03:10.371534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.371559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.371581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.371613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.371636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.371660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.371682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.371707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.371729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.371752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.371774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.371799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.371821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.371845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.371867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.371908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.371930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.371954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.371975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.371999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.372020] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.372044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.372065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.372091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa700 is same with the state(6) to be set 00:29:44.375 [2024-11-10 00:03:10.375232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:44.375 [2024-11-10 00:03:10.375546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.375 [2024-11-10 00:03:10.375598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:44.375 [2024-11-10 00:03:10.375627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:44.375 [2024-11-10 00:03:10.375669] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:44.375 [2024-11-10 00:03:10.375716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:44.375 [2024-11-10 00:03:10.375771] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:44.375 [2024-11-10 00:03:10.375829] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
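The completion notices above all carry NVMe status (00/08), which appears to be status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion": the queued READs were discarded when their submission queue was torn down during the controller reset. The connect() errors report errno 111, which is ECONNREFUSED on Linux, i.e. nothing was accepting on 10.0.0.2:4420 at that moment. A minimal, self-contained C sketch that decodes those two values (illustration only, not SPDK code; the helper name is made up):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: interpret the "(SCT/SC)" pair printed by
     * spdk_nvme_print_completion in the log above. */
    static const char *generic_status_str(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x0 && sc == 0x08) {
            return "Command Aborted due to SQ Deletion";
        }
        return "other status";
    }

    int main(void)
    {
        /* (00/08) as seen in the completions above. */
        printf("(00/08) -> %s\n", generic_status_str(0x0, 0x08));
        /* errno 111 from the failed connect() in posix_sock_create. */
        printf("errno 111 == ECONNREFUSED? %s\n",
               111 == ECONNREFUSED ? "yes" : "no");
        return 0;
    }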
00:29:44.375 [2024-11-10 00:03:10.375872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:44.375 [2024-11-10 00:03:10.375955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:44.375 [2024-11-10 00:03:10.376712] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.375 [2024-11-10 00:03:10.377516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:44.375 [2024-11-10 00:03:10.377556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:44.375 [2024-11-10 00:03:10.377583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:44.375 [2024-11-10 00:03:10.377827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.375 [2024-11-10 00:03:10.377866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:44.375 [2024-11-10 00:03:10.377891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:44.375 [2024-11-10 00:03:10.379457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.379490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.379522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.379546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.379570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.379616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.379652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.379676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.379701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.379729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.379755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.379778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.379802] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.379824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.379848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.379871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.379895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.379933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.379958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.379979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.380956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.380981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:44.375 [2024-11-10 00:03:10.381765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.381956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.381977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.382001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.382022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.382046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.382067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.382089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.375 [2024-11-10 00:03:10.382111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.375 [2024-11-10 00:03:10.382135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.382156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.382184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.382206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.382229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 
00:03:10.382251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.382274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.382296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.382319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.382340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.382364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.382385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.382408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.382430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.382454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.382475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.382498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.382519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.382542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.382576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.382610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(6) to be set 00:29:44.376 [2024-11-10 00:03:10.384178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384812] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.384956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.384982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.385970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.385993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:44.376 [2024-11-10 00:03:10.386771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.386962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.386983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.387007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.387028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.387052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.387073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.387106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.387127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.387150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.387172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.387202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.387223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.387247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 
00:03:10.387268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.387292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.376 [2024-11-10 00:03:10.387314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.376 [2024-11-10 00:03:10.387336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fac00 is same with the state(6) to be set 00:29:44.376 [2024-11-10 00:03:10.388918] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.376 [2024-11-10 00:03:10.389693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:44.376 [2024-11-10 00:03:10.389741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:44.376 [2024-11-10 00:03:10.389774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:44.377 [2024-11-10 00:03:10.390013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.377 [2024-11-10 00:03:10.390052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:44.377 [2024-11-10 00:03:10.390077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:44.377 [2024-11-10 00:03:10.390196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.377 [2024-11-10 00:03:10.390232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:44.377 [2024-11-10 00:03:10.390256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:44.377 [2024-11-10 00:03:10.390382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.377 [2024-11-10 00:03:10.390417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:44.377 [2024-11-10 00:03:10.390449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:44.377 [2024-11-10 00:03:10.390477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:44.377 [2024-11-10 00:03:10.390505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:44.377 [2024-11-10 00:03:10.390528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:44.377 [2024-11-10 00:03:10.390552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:44.377 [2024-11-10 00:03:10.390575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
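The records above trace the retry path this test exercises: the controllers are disconnected and reset, the TCP reconnects to 10.0.0.2:4420 are refused (errno 111), one controller is marked failed and its reset is reported as failed, and failovers for other subsystems are skipped because one is already in progress. A rough sketch of the same retry-on-ECONNREFUSED idea in plain POSIX C (illustration only, not SPDK's implementation; the address and port are taken from the log):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Retry a TCP connect a few times, treating ECONNREFUSED
     * ("target not listening yet") as the only retryable error. */
    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        for (int attempt = 1; attempt <= 3; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) {
                return 1;
            }
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                printf("attempt %d: connected\n", attempt);
                close(fd);
                return 0;
            }
            printf("attempt %d: connect failed, errno=%d (%s)\n",
                   attempt, errno, strerror(errno));
            close(fd);
            if (errno != ECONNREFUSED) {
                break;      /* give up on anything other than refusal */
            }
            sleep(1);       /* crude backoff between reconnect attempts */
        }
        return 1;
    }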
00:29:44.377 [2024-11-10 00:03:10.390658] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:44.377 [2024-11-10 00:03:10.390703] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:44.377 [2024-11-10 00:03:10.390761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:44.377 [2024-11-10 00:03:10.390801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:44.377 [2024-11-10 00:03:10.390837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:44.377 [2024-11-10 00:03:10.391316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391726] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.391974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.391998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.392975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.392997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.393975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.393996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.394020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.394041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.394064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.394085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.394108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.394129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.394152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.394173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.394196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.394218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.377 [2024-11-10 00:03:10.394241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.377 [2024-11-10 00:03:10.394263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.394286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.394306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.394329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.394350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.394374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.394394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.394415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fae80 is same with the state(6) to be set 00:29:44.378 [2024-11-10 00:03:10.396025] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:44.378 [2024-11-10 00:03:10.396212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:44.378 [2024-11-10 00:03:10.396451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.378 [2024-11-10 00:03:10.396488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:44.378 [2024-11-10 00:03:10.396530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:44.378 [2024-11-10 00:03:10.396643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.378 [2024-11-10 00:03:10.396678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:29:44.378 [2024-11-10 00:03:10.396702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 
is same with the state(6) to be set 00:29:44.378 [2024-11-10 00:03:10.396804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.378 [2024-11-10 00:03:10.396838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420 00:29:44.378 [2024-11-10 00:03:10.396862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:44.378 [2024-11-10 00:03:10.396890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:44.378 [2024-11-10 00:03:10.396911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:44.378 [2024-11-10 00:03:10.396937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:44.378 [2024-11-10 00:03:10.396957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:44.378 [2024-11-10 00:03:10.398035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.398962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.398986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:44.378 [2024-11-10 00:03:10.399400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 
00:03:10.399886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.399967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.399990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400348] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.400978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.400999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.401022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.401044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.401067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.401088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.401111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.401132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.401155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.378 [2024-11-10 00:03:10.401176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.378 [2024-11-10 00:03:10.401197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb100 is same with the state(6) to be set 00:29:44.378 [2024-11-10 00:03:10.405936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:44.378 task offset: 16896 on job bdev=Nvme2n1 fails
00:29:44.378
00:29:44.378 Latency(us)
00:29:44.378 [2024-11-09T23:03:10.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.379 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:44.379 Job: Nvme1n1 ended in about 0.93 seconds with error
00:29:44.379 Verification LBA range: start 0x0 length 0x400
00:29:44.379 Nvme1n1 : 0.93 137.37 8.59 68.69 0.00 306970.42 27573.67 282727.16
00:29:44.379 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:44.379 Job: Nvme2n1 ended in about 0.92 seconds with error
00:29:44.379 Verification LBA range: start 0x0 length 0x400
00:29:44.379 Nvme2n1 : 0.92 143.19 8.95 69.42 0.00 290943.18 12815.93 307582.29
00:29:44.379 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:44.379 Job: Nvme3n1 ended in about 0.94 seconds with error
00:29:44.379 Verification LBA range: start 0x0 length 0x400
00:29:44.379 Nvme3n1 : 0.94 140.92 8.81 68.32 0.00 289284.23 24855.13 281173.71
00:29:44.379 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:44.379 Job: Nvme4n1 ended in about 0.94 seconds with error
00:29:44.379 Verification LBA range: start 0x0 length 0x400
00:29:44.379 Nvme4n1 : 0.94 135.96 8.50 67.98 0.00 290251.16 24078.41 301368.51
00:29:44.379 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:44.379 Job: Nvme5n1 ended in about 0.95 seconds with error
00:29:44.379 Verification LBA range: start 0x0 length 0x400
00:29:44.379 Nvme5n1 : 0.95 134.46 8.40 67.23 0.00 287138.20 23884.23 304475.40
00:29:44.379 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:44.379 Job: Nvme6n1 ended in about 0.96 seconds with error
00:29:44.379 Verification LBA range: start 0x0 length 0x400
00:29:44.379 Nvme6n1 : 0.96 133.80 8.36 66.90 0.00 282153.21 22330.79 281173.71
00:29:44.379 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:44.379 Job: Nvme7n1 ended in about 0.96 seconds with error
00:29:44.379 Verification LBA range: start 0x0 length 0x400
00:29:44.379 Nvme7n1 : 0.96 132.81 8.30 66.41 0.00 277846.85 23204.60 282727.16
00:29:44.379 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:44.379 Job: Nvme8n1 ended in about 0.97 seconds with error
00:29:44.379 Verification LBA range: start 0x0 length 0x400
00:29:44.379 Nvme8n1 : 0.97 157.66 9.85 65.95 0.00 242029.52 19806.44 330883.98
00:29:44.379 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:44.379 Job: Nvme9n1 ended in about 0.94 seconds with error
00:29:44.379 Verification LBA range: start 0x0 length 0x400
00:29:44.379 Nvme9n1 : 0.94 135.70 8.48 67.85 0.00 257901.86 26020.22 321563.31
00:29:44.379 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:44.379 Job: Nvme10n1 ended in about 0.93 seconds with error
00:29:44.379 Verification LBA range: start 0x0 length 0x400
00:29:44.379 Nvme10n1 : 0.93 74.42 4.65 69.03 0.00 355251.11 21651.15 340204.66
00:29:44.379 [2024-11-09T23:03:10.580Z] ===================================================================================================================
00:29:44.379 [2024-11-09T23:03:10.580Z] Total : 1326.29 82.89 677.78 0.00 285283.85 12815.93 340204.66
00:29:44.379 [2024-11-10 00:03:10.493058] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:44.379 [2024-11-10 00:03:10.493170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:44.379 [2024-11-10 00:03:10.493503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.379 [2024-11-10 00:03:10.493549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:29:44.379 [2024-11-10 00:03:10.493578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:44.379 [2024-11-10 00:03:10.493627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.493665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions:
*ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.493697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.493723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.493744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.493768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.493793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:44.379 [2024-11-10 00:03:10.493817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.493836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.493855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.493875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:44.379 [2024-11-10 00:03:10.493896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.493914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.493959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.493979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:44.379 [2024-11-10 00:03:10.494049] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:29:44.379 [2024-11-10 00:03:10.494081] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:44.379 [2024-11-10 00:03:10.494110] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
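As a quick sanity check on the bdevperf summary above, the sketch below recomputes throughput from a few of the reported rows, assuming the MiB/s column is simply IOPS times the 64 KiB I/O size shown in each job line (the numbers are copied from the table; nothing here is produced by the test itself).

    # Reported (IOPS, MiB/s) pairs copied from the summary table above.
    io_size_bytes = 65536
    rows = {
        "Nvme1n1": (137.37, 8.59),
        "Nvme2n1": (143.19, 8.95),
        "Nvme8n1": (157.66, 9.85),
        "Nvme10n1": (74.42, 4.65),
    }
    for name, (iops, mib_s) in rows.items():
        derived = iops * io_size_bytes / (1024 * 1024)  # 64 KiB per I/O -> MiB/s
        print(f"{name}: reported {mib_s:.2f} MiB/s, derived {derived:.2f} MiB/s")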
00:29:44.379 [2024-11-10 00:03:10.495255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.379 [2024-11-10 00:03:10.495304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:44.379 [2024-11-10 00:03:10.495329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:44.379 [2024-11-10 00:03:10.495459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.379 [2024-11-10 00:03:10.495494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6b00 with addr=10.0.0.2, port=4420 00:29:44.379 [2024-11-10 00:03:10.495517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:44.379 [2024-11-10 00:03:10.495545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.495571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.495600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.495623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.495643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:44.379 [2024-11-10 00:03:10.495665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.495684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.495703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.495722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:44.379 [2024-11-10 00:03:10.495742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.495761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.495779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.495798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:44.379 [2024-11-10 00:03:10.495824] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:44.379 [2024-11-10 00:03:10.495868] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:44.379 [2024-11-10 00:03:10.495915] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:29:44.379 [2024-11-10 00:03:10.495941] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:44.379 [2024-11-10 00:03:10.495972] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:44.379 [2024-11-10 00:03:10.496727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:44.379 [2024-11-10 00:03:10.496767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:44.379 [2024-11-10 00:03:10.496795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:44.379 [2024-11-10 00:03:10.496820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:44.379 [2024-11-10 00:03:10.496972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.497009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.497035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.497056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.497076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.497095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:29:44.379 [2024-11-10 00:03:10.497284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:44.379 [2024-11-10 00:03:10.497319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:44.379 [2024-11-10 00:03:10.497344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:44.379 [2024-11-10 00:03:10.497518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.379 [2024-11-10 00:03:10.497556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:44.379 [2024-11-10 00:03:10.497579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:44.379 [2024-11-10 00:03:10.497733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.379 [2024-11-10 00:03:10.497768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:44.379 [2024-11-10 00:03:10.497791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:44.379 [2024-11-10 00:03:10.497893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.379 [2024-11-10 00:03:10.497927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:44.379 [2024-11-10 00:03:10.497950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:44.379 [2024-11-10 00:03:10.498081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.379 [2024-11-10 00:03:10.498116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:44.379 [2024-11-10 00:03:10.498139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:44.379 [2024-11-10 00:03:10.498162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.498181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.498206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.498227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:44.379 [2024-11-10 00:03:10.498249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.498269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.498288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.498307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:29:44.379 [2024-11-10 00:03:10.498495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.379 [2024-11-10 00:03:10.498531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420 00:29:44.379 [2024-11-10 00:03:10.498555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:44.379 [2024-11-10 00:03:10.498685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.379 [2024-11-10 00:03:10.498720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:29:44.379 [2024-11-10 00:03:10.498743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:44.379 [2024-11-10 00:03:10.498865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.379 [2024-11-10 00:03:10.498899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:44.379 [2024-11-10 00:03:10.498922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:44.379 [2024-11-10 00:03:10.498950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.498980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.499010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.499039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.499111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.499162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.499191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:44.379 [2024-11-10 00:03:10.499215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.499235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.499254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.499273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:29:44.379 [2024-11-10 00:03:10.499294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.499312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.499330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.499353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:44.379 [2024-11-10 00:03:10.499375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.499394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.499412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.499430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:44.379 [2024-11-10 00:03:10.499449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.499466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.499485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.499503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:44.379 [2024-11-10 00:03:10.499608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.499636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.499656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.499691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:44.379 [2024-11-10 00:03:10.499713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.499732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.499750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.499769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:29:44.379 [2024-11-10 00:03:10.499790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:44.379 [2024-11-10 00:03:10.499808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:44.379 [2024-11-10 00:03:10.499827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:44.379 [2024-11-10 00:03:10.499845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:47.667 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:48.231 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3557636 00:29:48.231 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:29:48.231 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3557636 00:29:48.231 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:29:48.231 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:48.231 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:29:48.231 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:48.231 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3557636 00:29:48.231 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:29:48.231 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.232 00:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.232 rmmod nvme_tcp 00:29:48.232 rmmod nvme_fabrics 00:29:48.232 rmmod nvme_keyring 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3557308 ']' 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3557308 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3557308 ']' 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3557308 00:29:48.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3557308) - No such process 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3557308 is not found' 00:29:48.232 Process with pid 3557308 is not found 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.232 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.145 00:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.145 00:29:50.145 real 0m11.613s 00:29:50.145 user 0m33.964s 00:29:50.145 sys 0m2.052s 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:50.145 ************************************ 00:29:50.145 END TEST nvmf_shutdown_tc3 00:29:50.145 ************************************ 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:50.145 ************************************ 00:29:50.145 START TEST nvmf_shutdown_tc4 00:29:50.145 ************************************ 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.145 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:50.146 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:50.146 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:50.146 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:50.146 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.146 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:50.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:29:50.405 00:29:50.405 --- 10.0.0.2 ping statistics --- 00:29:50.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.405 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:50.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:29:50.405 00:29:50.405 --- 10.0.0.1 ping statistics --- 00:29:50.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.405 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3559303 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3559303 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3559303 ']' 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:50.405 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:50.405 [2024-11-10 00:03:16.547454] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:29:50.405 [2024-11-10 00:03:16.547623] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.662 [2024-11-10 00:03:16.693365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.662 [2024-11-10 00:03:16.831789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.662 [2024-11-10 00:03:16.831880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.663 [2024-11-10 00:03:16.831905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.663 [2024-11-10 00:03:16.831942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.663 [2024-11-10 00:03:16.831962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.663 [2024-11-10 00:03:16.834724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.663 [2024-11-10 00:03:16.834851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.663 [2024-11-10 00:03:16.834895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.663 [2024-11-10 00:03:16.834915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.596 [2024-11-10 00:03:17.572725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:51.596 00:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.596 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.596 Malloc1 
00:29:51.596 [2024-11-10 00:03:17.717327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.596 Malloc2 00:29:51.854 Malloc3 00:29:51.854 Malloc4 00:29:52.112 Malloc5 00:29:52.112 Malloc6 00:29:52.112 Malloc7 00:29:52.370 Malloc8 00:29:52.370 Malloc9 00:29:52.630 Malloc10 00:29:52.630 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.630 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:52.630 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:52.630 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:52.630 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3559609 00:29:52.630 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:52.630 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:52.630 [2024-11-10 00:03:18.746465] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:57.919 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:57.919 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3559303 00:29:57.919 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3559303 ']' 00:29:57.919 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3559303 00:29:57.919 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:29:57.919 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:57.919 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3559303 00:29:57.919 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:57.919 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:57.919 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3559303' 00:29:57.919 killing process with pid 3559303 00:29:57.919 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3559303 00:29:57.919 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3559303 00:29:57.919 Write completed with error (sct=0, 
sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 [2024-11-10 00:03:23.701961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 
00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 [2024-11-10 00:03:23.704239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 [2024-11-10 00:03:23.704535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(6) to be set 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 [2024-11-10 00:03:23.704608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(6) to be set 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 starting I/O failed: -6 00:29:57.919 [2024-11-10 00:03:23.704666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(6) to be set 00:29:57.919 Write completed with error (sct=0, sc=8) 00:29:57.919 [2024-11-10 00:03:23.704690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(6) to be set 00:29:57.919 starting I/O failed: -6 00:29:57.919 [2024-11-10 00:03:23.704709] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(6) to be set 00:29:57.919 [2024-11-10 00:03:23.704727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(6) to be set 00:29:57.919 [2024-11-10 00:03:23.704745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same Write completed with error (sct=0, sc=8) 00:29:57.919 with the state(6) to be set 00:29:57.919 starting I/O failed: -6 00:29:57.919 [2024-11-10 00:03:23.704765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(6) to be set 00:29:57.920 [2024-11-10 00:03:23.704784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(6) to be set 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 [2024-11-10 00:03:23.705754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(6) to be set 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 [2024-11-10 00:03:23.705800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(6) to be set 00:29:57.920 [2024-11-10 00:03:23.705824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same Write completed with error (sct=0, sc=8) 00:29:57.920 with the state(6) to be set 00:29:57.920 [2024-11-10 00:03:23.705861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(6) to be set 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 [2024-11-10 00:03:23.705893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(6) to be set 00:29:57.920 [2024-11-10 
00:03:23.705911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(6) to be set 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 [2024-11-10 00:03:23.706922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.920 [2024-11-10 00:03:23.706963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(6) to be set 00:29:57.920 starting I/O failed: -6 00:29:57.920 [2024-11-10 00:03:23.707005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(6) to be set 00:29:57.920 [2024-11-10 00:03:23.707028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(6) to be set 00:29:57.920 [2024-11-10 00:03:23.707046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(6) to be set 00:29:57.920 [2024-11-10 00:03:23.707063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(6) to be set 00:29:57.920 [2024-11-10 00:03:23.707080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(6) to be set 00:29:57.920 [2024-11-10 00:03:23.707098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(6) to be set 00:29:57.920 [2024-11-10 00:03:23.707116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(6) to be set 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 
00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.920 Write completed with error (sct=0, sc=8) 00:29:57.920 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 
00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 [2024-11-10 00:03:23.716665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.921 NVMe io qpair process completion error 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with 
error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 [2024-11-10 00:03:23.718791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 
Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 [2024-11-10 00:03:23.720911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.921 starting I/O failed: -6 00:29:57.921 starting I/O failed: -6 00:29:57.921 starting I/O failed: -6 00:29:57.921 starting I/O failed: -6 00:29:57.921 starting I/O failed: -6 00:29:57.921 starting I/O failed: -6 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 Write completed with error (sct=0, sc=8) 00:29:57.921 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 
00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 [2024-11-10 00:03:23.723821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting 
I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 [2024-11-10 00:03:23.736834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.922 NVMe io qpair process completion error 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, 
sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 starting I/O failed: -6 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.922 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 [2024-11-10 00:03:23.739082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.923 starting I/O failed: -6 00:29:57.923 starting I/O failed: -6 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, 
sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 [2024-11-10 00:03:23.741239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error 
(sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 [2024-11-10 00:03:23.743883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.923 starting I/O failed: -6 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.923 Write completed with error (sct=0, sc=8) 00:29:57.923 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O 
failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O 
failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 [2024-11-10 00:03:23.761056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.924 NVMe io qpair process completion error 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 starting I/O failed: -6 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.924 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 [2024-11-10 00:03:23.763267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 
00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 [2024-11-10 00:03:23.765450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: 
-6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error 
(sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 [2024-11-10 00:03:23.768415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.925 Write completed with error (sct=0, sc=8) 00:29:57.925 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 
00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 [2024-11-10 00:03:23.777846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.926 NVMe io qpair process completion error 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 starting I/O failed: -6 00:29:57.926 Write completed with error (sct=0, sc=8) 00:29:57.926 
00:29:57.926 Write completed with error (sct=0, sc=8)
00:29:57.926 starting I/O failed: -6
00:29:57.926 [2024-11-10 00:03:23.779944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.927 [2024-11-10 00:03:23.781963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:57.927 [2024-11-10 00:03:23.784672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.928 starting I/O failed: -6
00:29:57.928 Write completed with error (sct=0, sc=8)
00:29:57.928 [2024-11-10 00:03:23.794174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.928 NVMe io qpair process completion error
00:29:57.928 [2024-11-10 00:03:23.796254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.928 [2024-11-10 00:03:23.798183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:57.929 [2024-11-10 00:03:23.800985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.929 Write completed with error (sct=0, sc=8)
00:29:57.929 starting I/O failed: -6
00:29:57.929 [2024-11-10 00:03:23.814860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.929 NVMe io qpair process completion error
00:29:57.930 [2024-11-10 00:03:23.817089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.930 [2024-11-10 00:03:23.819324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:57.931 [2024-11-10 00:03:23.822066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.931 [2024-11-10 00:03:23.835027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.931 NVMe io qpair process completion error
00:29:57.931 Write completed with error (sct=0, sc=8)
00:29:57.931 starting I/O failed: -6
00:29:57.931 [2024-11-10 00:03:23.836948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.932 [2024-11-10 00:03:23.839032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:57.932 [2024-11-10 00:03:23.841761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 Write completed with error (sct=0, sc=8)
00:29:57.933 starting I/O failed: -6
00:29:57.933 [2024-11-10 00:03:23.854371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.933 NVMe io qpair process completion error
00:29:57.933 [2024-11-10 00:03:23.856422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.933 [2024-11-10 00:03:23.858739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 [2024-11-10 00:03:23.861466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error 
(sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 [2024-11-10 00:03:23.873815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.934 NVMe io qpair process completion error 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 
starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 starting I/O failed: -6 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.934 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 [2024-11-10 00:03:23.875973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with 
error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 [2024-11-10 00:03:23.878191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, 
sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 [2024-11-10 00:03:23.880825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write 
completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.935 Write completed with error (sct=0, sc=8) 00:29:57.935 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write 
completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 Write completed with error (sct=0, sc=8) 00:29:57.936 starting I/O failed: -6 00:29:57.936 [2024-11-10 00:03:23.893242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.936 NVMe io qpair process completion error 00:29:57.936 Initializing NVMe Controllers 00:29:57.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:57.936 Controller IO queue size 128, less than required. 00:29:57.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:57.936 Controller IO queue size 128, less than required. 00:29:57.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:57.936 Controller IO queue size 128, less than required. 00:29:57.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:57.936 Controller IO queue size 128, less than required. 00:29:57.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:57.936 Controller IO queue size 128, less than required. 00:29:57.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:57.936 Controller IO queue size 128, less than required. 00:29:57.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:57.936 Controller IO queue size 128, less than required. 00:29:57.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.936 Controller IO queue size 128, less than required. 
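
The burst of aborted writes and "CQ transport error -6" messages above, together with the queue-size advisory printed here, are the expected signature of nvmf_shutdown_tc4: the nvmf target is torn down while spdk_nvme_perf still has writes in flight, so every outstanding command completes with the generic NVMe status (sct=0, sc=8), i.e. "Command Aborted due to SQ Deletion", and the dead TCP connection surfaces as -6 (-ENXIO, "No such device or address"). A rough, hypothetical sketch of that sequence outside the harness is below; the repo path, address and subsystem NQN are copied from this log, while the queue depth, the timing and the $nvmfpid variable are illustrative assumptions rather than the harness's own values.

    #!/usr/bin/env bash
    # Hypothetical sketch (not the harness's script): drive writes at one of the
    # NVMe-oF/TCP subsystems above and kill the target mid-run, which reproduces
    # the "aborted by SQ deletion" completions and CQ transport errors in this log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # repo path as used in this job

    # -q stays below the subsystem's 128-entry IO queue, per the advisory above;
    # address, port and subsystem NQN are copied from the log.
    "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w write -t 30 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2' &
    perf_pid=$!

    sleep 5                 # let writes get in flight (timing is illustrative)
    kill -9 "$nvmfpid"      # nvmfpid: PID of the nvmf target, assumed exported by the caller

    wait "$perf_pid" || echo "spdk_nvme_perf exited with errors, as expected"
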
00:29:57.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:57.936 Controller IO queue size 128, less than required. 00:29:57.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:57.936 Controller IO queue size 128, less than required. 00:29:57.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:57.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:57.936 Initialization complete. Launching workers. 00:29:57.936 ======================================================== 00:29:57.936 Latency(us) 00:29:57.936 Device Information : IOPS MiB/s Average min max 00:29:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1402.91 60.28 91275.47 1636.88 193640.32 00:29:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1421.97 61.10 90186.25 1873.15 190662.88 00:29:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1393.06 59.86 92197.34 1691.06 216643.85 00:29:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1400.13 60.16 91960.49 1772.60 233857.65 00:29:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1423.04 61.15 90677.50 2137.52 221851.09 00:29:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1430.96 61.49 90370.58 2335.72 265748.96 00:29:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1420.90 61.05 91205.84 2121.89 281744.07 00:29:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1448.09 62.22 86033.84 2330.93 152448.08 00:29:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1449.16 62.27 86118.94 1589.52 153912.56 00:29:57.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1445.10 62.09 86565.91 1547.11 169993.12 00:29:57.936 ======================================================== 00:29:57.936 Total : 14235.33 611.67 89630.34 1547.11 281744.07 00:29:57.936 00:29:57.936 [2024-11-10 00:03:23.922467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016d80 is same with the state(6) to be set 00:29:57.936 [2024-11-10 00:03:23.922617] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017c80 is same with the state(6) to be set 00:29:57.936 [2024-11-10 00:03:23.922703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018680 is same with the state(6) to be set 00:29:57.936 [2024-11-10 00:03:23.922786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:29:57.936 [2024-11-10 00:03:23.922868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017780 is same with the state(6) to be set 00:29:57.936 [2024-11-10 00:03:23.922950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:29:57.936 [2024-11-10 00:03:23.923032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017280 is same with the state(6) to be set 00:29:57.937 [2024-11-10 00:03:23.923115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(6) to be set 00:29:57.937 [2024-11-10 00:03:23.923197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(6) to be set 00:29:57.937 [2024-11-10 00:03:23.923289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018180 is same with the state(6) to be set 00:29:57.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:00.468 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:30:01.413 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3559609 00:30:01.413 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:30:01.413 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3559609 00:30:01.413 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:30:01.413 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:01.413 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:30:01.413 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:01.413 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3559609 00:30:01.413 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:30:01.413 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:30:01.414 00:03:27 
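
The "NOT wait 3559609" trace above is the harness asserting that the backgrounded spdk_nvme_perf really did exit non-zero: wait surfaces the child's exit status (1 here, matching "errors occurred"), and the NOT wrapper from autotest_common.sh inverts it while treating exit codes above 128 (signal deaths) as a separate case. A simplified, hypothetical rendering of that idiom, not the actual helper:

    # Simplified, hypothetical version of the harness's NOT helper: run a command
    # and succeed only if it fails with a normal (non-signal) error, which is what
    # "NOT wait 3559609" asserts about the backgrounded spdk_nvme_perf above.
    NOT() {
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then   # 129+ means killed by a signal, handled separately upstream
            return 1
        fi
        (( es != 0 ))             # NOT succeeds exactly when the wrapped command failed
    }

    false &                       # stand-in for a perf run that ends in error
    pid=$!
    NOT wait "$pid" && echo "child failed, as the test expects"

Using false as the stand-in keeps the example self-contained; in the real test the child is the spdk_nvme_perf run shown above.
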
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:01.414 rmmod nvme_tcp 00:30:01.414 rmmod nvme_fabrics 00:30:01.414 rmmod nvme_keyring 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3559303 ']' 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3559303 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3559303 ']' 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3559303 00:30:01.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3559303) - No such process 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3559303 is not found' 00:30:01.414 Process with pid 3559303 is not found 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 
-- # iptables-restore 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.414 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:03.948 00:30:03.948 real 0m13.305s 00:30:03.948 user 0m35.548s 00:30:03.948 sys 0m5.793s 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:03.948 ************************************ 00:30:03.948 END TEST nvmf_shutdown_tc4 00:30:03.948 ************************************ 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:03.948 00:30:03.948 real 0m54.243s 00:30:03.948 user 2m44.455s 00:30:03.948 sys 0m13.780s 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:03.948 ************************************ 00:30:03.948 END TEST nvmf_shutdown 00:30:03.948 ************************************ 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:03.948 ************************************ 00:30:03.948 START TEST nvmf_nsid 00:30:03.948 ************************************ 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:03.948 * Looking for test storage... 
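
Before nvmf_nsid goes looking for its test storage, nvmftestfini (traced just above) has unwound the host state left by the shutdown suite: the kernel NVMe/TCP initiator modules are removed, iptables rules tagged SPDK_NVMF are dropped, the SPDK network namespace is torn down and the test NIC address is flushed. A hedged sketch of an equivalent manual cleanup follows; the commands mirror the trace, but the explicit "ip netns del" is only an assumption about what _remove_spdk_ns does, and the interface/namespace names are the ones from this particular run.

    #!/usr/bin/env bash
    # Hypothetical manual equivalent of the nvmftestfini teardown traced above.
    set -x

    # Unload the kernel initiator stack (ignore errors if a module is absent).
    modprobe -v -r nvme-tcp     || true
    modprobe -v -r nvme-fabrics || true

    # Drop only the firewall rules the test framework tagged with SPDK_NVMF.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Remove the target-side network namespace and flush the test NIC address
    # (names taken from this run; the netns deletion is an assumption about
    # what _remove_spdk_ns performs internally).
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1
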
00:30:03.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.948 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:03.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.949 --rc genhtml_branch_coverage=1 00:30:03.949 --rc genhtml_function_coverage=1 00:30:03.949 --rc genhtml_legend=1 00:30:03.949 --rc geninfo_all_blocks=1 00:30:03.949 --rc geninfo_unexecuted_blocks=1 00:30:03.949 00:30:03.949 ' 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:03.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.949 --rc genhtml_branch_coverage=1 00:30:03.949 --rc genhtml_function_coverage=1 00:30:03.949 --rc genhtml_legend=1 00:30:03.949 --rc geninfo_all_blocks=1 00:30:03.949 --rc geninfo_unexecuted_blocks=1 00:30:03.949 00:30:03.949 ' 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:03.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.949 --rc genhtml_branch_coverage=1 00:30:03.949 --rc genhtml_function_coverage=1 00:30:03.949 --rc genhtml_legend=1 00:30:03.949 --rc geninfo_all_blocks=1 00:30:03.949 --rc geninfo_unexecuted_blocks=1 00:30:03.949 00:30:03.949 ' 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:03.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.949 --rc genhtml_branch_coverage=1 00:30:03.949 --rc genhtml_function_coverage=1 00:30:03.949 --rc genhtml_legend=1 00:30:03.949 --rc geninfo_all_blocks=1 00:30:03.949 --rc geninfo_unexecuted_blocks=1 00:30:03.949 00:30:03.949 ' 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:03.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:03.949 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:05.911 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.911 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:05.912 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:05.912 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
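The trace above is the NIC discovery step: the harness walks the PCI bus looking for the vendor/device IDs it knows about (Intel E810, 0x8086:0x159b in this run) and notes which kernel driver owns each port before deciding it has hardware to test on. A minimal sketch of the same lookup against sysfs, assuming the loop structure and variable names are illustrative rather than the script's exact code:

  # sketch only: report E810 ports the way the "Found 0000:0a:00.0 (0x8086 - 0x159b)" lines do
  for pci in /sys/bus/pci/devices/*; do
      ven=$(<"$pci/vendor"); dev=$(<"$pci/device")
      [[ $ven == 0x8086 && $dev == 0x159b ]] || continue      # E810 vendor/device IDs matched above
      if [[ -e $pci/driver ]]; then drv=$(basename "$(readlink "$pci/driver")"); else drv=unbound; fi
      echo "Found ${pci##*/} ($ven - $dev), driver ${drv}"    # e.g. driver "ice"
  done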
00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:05.912 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:05.912 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.912 00:03:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.912 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:30:05.913 00:30:05.913 --- 10.0.0.2 ping statistics --- 00:30:05.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.913 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:30:05.913 00:30:05.913 --- 10.0.0.1 ping statistics --- 00:30:05.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.913 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3562606 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3562606 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3562606 ']' 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:05.913 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:05.913 [2024-11-10 00:03:32.006845] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:30:05.913 [2024-11-10 00:03:32.007005] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.171 [2024-11-10 00:03:32.158970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.171 [2024-11-10 00:03:32.296279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.171 [2024-11-10 00:03:32.296368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.171 [2024-11-10 00:03:32.296395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.171 [2024-11-10 00:03:32.296419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.172 [2024-11-10 00:03:32.296439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.172 [2024-11-10 00:03:32.298092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3562757 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=02b29534-0725-4c32-88e9-feeb59fda8a7 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=50abe7a8-39e3-4887-8b74-be0bc2243af0 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=9fd5065d-00b3-4631-9d22-c1babeb59f98 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:07.106 null0 00:30:07.106 null1 00:30:07.106 null2 00:30:07.106 [2024-11-10 00:03:33.076491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.106 [2024-11-10 00:03:33.100780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3562757 /var/tmp/tgt2.sock 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3562757 ']' 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:07.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:07.106 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:07.106 [2024-11-10 00:03:33.145541] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:30:07.106 [2024-11-10 00:03:33.145717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3562757 ] 00:30:07.106 [2024-11-10 00:03:33.282029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.364 [2024-11-10 00:03:33.407476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.305 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:08.305 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:30:08.305 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:08.874 [2024-11-10 00:03:34.770084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.874 [2024-11-10 00:03:34.786415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:08.874 nvme0n1 nvme0n2 00:30:08.874 nvme1n1 00:30:08.874 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:08.874 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:08.874 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:30:09.440 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:30:10.374 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 02b29534-0725-4c32-88e9-feeb59fda8a7 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=02b2953407254c3288e9feeb59fda8a7 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 02B2953407254C3288E9FEEB59FDA8A7 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 02B2953407254C3288E9FEEB59FDA8A7 == \0\2\B\2\9\5\3\4\0\7\2\5\4\C\3\2\8\8\E\9\F\E\E\B\5\9\F\D\A\8\A\7 ]] 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 50abe7a8-39e3-4887-8b74-be0bc2243af0 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=50abe7a839e348878b74be0bc2243af0 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 50ABE7A839E348878B74BE0BC2243AF0 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 50ABE7A839E348878B74BE0BC2243AF0 == \5\0\A\B\E\7\A\8\3\9\E\3\4\8\8\7\8\B\7\4\B\E\0\B\C\2\2\4\3\A\F\0 ]] 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 9fd5065d-00b3-4631-9d22-c1babeb59f98 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:30:10.375 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:10.634 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9fd5065d00b346319d22c1babeb59f98 00:30:10.634 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9FD5065D00B346319D22C1BABEB59F98 00:30:10.634 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 9FD5065D00B346319D22C1BABEB59F98 == \9\F\D\5\0\6\5\D\0\0\B\3\4\6\3\1\9\D\2\2\C\1\B\A\B\E\B\5\9\F\9\8 ]] 00:30:10.634 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:10.634 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:10.634 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:10.634 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3562757 00:30:10.634 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3562757 ']' 00:30:10.634 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3562757 00:30:10.892 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:30:10.892 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:10.893 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3562757 00:30:10.893 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:10.893 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:10.893 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3562757' 00:30:10.893 killing process with pid 3562757 00:30:10.893 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3562757 00:30:10.893 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3562757 00:30:13.422 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:13.422 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:13.422 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:13.422 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
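The three comparisons above are the heart of the nsid test: for each namespace, the NGUID that the kernel reports must equal the uuidgen value assigned when the namespace was created, with dashes stripped and case ignored. A condensed sketch of one such check, using the first UUID from this run as a placeholder and assuming the block device name enumerated above:

  uuid=02b29534-0725-4c32-88e9-feeb59fda8a7                    # ns1uuid generated earlier in this run
  want=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')     # uuid2nguid: strip dashes, uppercase
  got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
  [[ $got == "$want" ]] && echo "nsid 1: nguid matches uuid" || echo "nsid 1: mismatch ($got != $want)"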
00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:13.423 rmmod nvme_tcp 00:30:13.423 rmmod nvme_fabrics 00:30:13.423 rmmod nvme_keyring 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3562606 ']' 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3562606 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3562606 ']' 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3562606 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3562606 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3562606' 00:30:13.423 killing process with pid 3562606 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3562606 00:30:13.423 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3562606 00:30:14.360 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:14.360 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:14.360 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:14.360 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:14.360 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:14.360 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:14.360 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:14.360 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:14.360 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:14.360 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.360 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.360 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.263 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:30:16.263 00:30:16.263 real 0m12.768s 00:30:16.263 user 0m15.694s 00:30:16.263 sys 0m2.922s 00:30:16.263 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:16.263 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:16.263 ************************************ 00:30:16.263 END TEST nvmf_nsid 00:30:16.263 ************************************ 00:30:16.522 00:03:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:16.522 00:30:16.522 real 18m37.457s 00:30:16.522 user 51m11.133s 00:30:16.522 sys 3m34.745s 00:30:16.522 00:03:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:16.522 00:03:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:16.522 ************************************ 00:30:16.522 END TEST nvmf_target_extra 00:30:16.522 ************************************ 00:30:16.522 00:03:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:16.522 00:03:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:16.522 00:03:42 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:16.522 00:03:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:16.522 ************************************ 00:30:16.522 START TEST nvmf_host 00:30:16.522 ************************************ 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:16.522 * Looking for test storage... 00:30:16.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:16.522 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:16.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.523 --rc genhtml_branch_coverage=1 00:30:16.523 --rc genhtml_function_coverage=1 00:30:16.523 --rc genhtml_legend=1 00:30:16.523 --rc geninfo_all_blocks=1 00:30:16.523 --rc geninfo_unexecuted_blocks=1 00:30:16.523 00:30:16.523 ' 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:16.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.523 --rc genhtml_branch_coverage=1 00:30:16.523 --rc genhtml_function_coverage=1 00:30:16.523 --rc genhtml_legend=1 00:30:16.523 --rc geninfo_all_blocks=1 00:30:16.523 --rc geninfo_unexecuted_blocks=1 00:30:16.523 00:30:16.523 ' 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:16.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.523 --rc genhtml_branch_coverage=1 00:30:16.523 --rc genhtml_function_coverage=1 00:30:16.523 --rc genhtml_legend=1 00:30:16.523 --rc geninfo_all_blocks=1 00:30:16.523 --rc geninfo_unexecuted_blocks=1 00:30:16.523 00:30:16.523 ' 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:16.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.523 --rc genhtml_branch_coverage=1 00:30:16.523 --rc genhtml_function_coverage=1 00:30:16.523 --rc genhtml_legend=1 00:30:16.523 --rc geninfo_all_blocks=1 00:30:16.523 --rc geninfo_unexecuted_blocks=1 00:30:16.523 00:30:16.523 ' 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
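The scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) is just a field-by-field version comparison used to pick which lcov option spelling to export. A compact sketch of the idea, with the function body simplified relative to the real helper:

  lt() {   # simplified stand-in for the cmp_versions "$1" '<' "$2" call traced above
      local -a v1 v2; local i
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1
  }
  lt 1.15 2 && echo "lcov older than 2: use the --rc lcov_* coverage flags"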
00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:16.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.523 ************************************ 00:30:16.523 START TEST nvmf_multicontroller 00:30:16.523 ************************************ 00:30:16.523 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:16.783 * Looking for test storage... 
00:30:16.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.783 --rc genhtml_branch_coverage=1 00:30:16.783 --rc genhtml_function_coverage=1 00:30:16.783 --rc genhtml_legend=1 00:30:16.783 --rc geninfo_all_blocks=1 00:30:16.783 --rc geninfo_unexecuted_blocks=1 00:30:16.783 00:30:16.783 ' 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.783 --rc genhtml_branch_coverage=1 00:30:16.783 --rc genhtml_function_coverage=1 00:30:16.783 --rc genhtml_legend=1 00:30:16.783 --rc geninfo_all_blocks=1 00:30:16.783 --rc geninfo_unexecuted_blocks=1 00:30:16.783 00:30:16.783 ' 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.783 --rc genhtml_branch_coverage=1 00:30:16.783 --rc genhtml_function_coverage=1 00:30:16.783 --rc genhtml_legend=1 00:30:16.783 --rc geninfo_all_blocks=1 00:30:16.783 --rc geninfo_unexecuted_blocks=1 00:30:16.783 00:30:16.783 ' 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.783 --rc genhtml_branch_coverage=1 00:30:16.783 --rc genhtml_function_coverage=1 00:30:16.783 --rc genhtml_legend=1 00:30:16.783 --rc geninfo_all_blocks=1 00:30:16.783 --rc geninfo_unexecuted_blocks=1 00:30:16.783 00:30:16.783 ' 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:16.783 00:03:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.783 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:16.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:16.784 00:03:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:16.784 00:03:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.685 
00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:18.685 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.685 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:18.686 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.686 00:03:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:18.686 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:18.686 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
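[annotation] The device scan above found the two Intel E810 ports (0000:0a:00.0 and 0000:0a:00.1, device id 0x159b, ice driver) and their net devices cvl_0_0 and cvl_0_1. nvmf_tcp_init, which runs next, moves one port into a private network namespace so the target and the initiator can exchange real TCP traffic on a single host. A minimal by-hand sketch with iproute2 (interface names and addresses follow this log; not the exact common.sh code):
  ip netns add cvl_0_0_ns_spdk                          # namespace that owns the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                    # initiator -> target reachability check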
00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.686 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:30:18.945 00:30:18.945 --- 10.0.0.2 ping statistics --- 00:30:18.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.945 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:30:18.945 00:30:18.945 --- 10.0.0.1 ping statistics --- 00:30:18.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.945 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:18.945 00:03:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.945 00:03:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3565597 00:30:18.945 00:03:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:18.945 00:03:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3565597 00:30:18.945 00:03:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3565597 ']' 00:30:18.945 00:03:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.945 00:03:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:18.945 00:03:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.945 00:03:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:18.945 00:03:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.945 [2024-11-10 00:03:45.094441] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:30:18.945 [2024-11-10 00:03:45.094616] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:19.203 [2024-11-10 00:03:45.267702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:19.462 [2024-11-10 00:03:45.412694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:19.462 [2024-11-10 00:03:45.412757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:19.462 [2024-11-10 00:03:45.412777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:19.462 [2024-11-10 00:03:45.412797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:19.462 [2024-11-10 00:03:45.412813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:19.462 [2024-11-10 00:03:45.415075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:19.462 [2024-11-10 00:03:45.415122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.462 [2024-11-10 00:03:45.415127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.028 [2024-11-10 00:03:46.191238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.028 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.285 Malloc0 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.285 [2024-11-10 00:03:46.311741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.285 [2024-11-10 00:03:46.319545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.285 Malloc1 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3565753 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3565753 /var/tmp/bdevperf.sock 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3565753 ']' 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:20.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
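[annotation] At this point the target (running inside cvl_0_0_ns_spdk) has a TCP transport, two 64 MiB / 512 B malloc bdevs, and two subsystems (cnode1 and cnode2) each listening on 10.0.0.2 ports 4420 and 4421; bdevperf has just been launched with -z (wait for RPC configuration) on its own socket /var/tmp/bdevperf.sock. Outside the harness, roughly the same target configuration can be driven with scripts/rpc.py, using the same RPC names the rpc_cmd calls above pass through (a sketch; addresses and NQNs copied from this run):
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421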
00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:20.285 00:03:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.659 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:21.659 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:30:21.659 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:21.659 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.659 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.659 NVMe0n1 00:30:21.659 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.659 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:21.659 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.660 1 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.660 request: 00:30:21.660 { 00:30:21.660 "name": "NVMe0", 00:30:21.660 "trtype": "tcp", 00:30:21.660 "traddr": "10.0.0.2", 00:30:21.660 "adrfam": "ipv4", 00:30:21.660 "trsvcid": "4420", 00:30:21.660 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:21.660 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:21.660 "hostaddr": "10.0.0.1", 00:30:21.660 "prchk_reftag": false, 00:30:21.660 "prchk_guard": false, 00:30:21.660 "hdgst": false, 00:30:21.660 "ddgst": false, 00:30:21.660 "allow_unrecognized_csi": false, 00:30:21.660 "method": "bdev_nvme_attach_controller", 00:30:21.660 "req_id": 1 00:30:21.660 } 00:30:21.660 Got JSON-RPC error response 00:30:21.660 response: 00:30:21.660 { 00:30:21.660 "code": -114, 00:30:21.660 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:21.660 } 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.660 request: 00:30:21.660 { 00:30:21.660 "name": "NVMe0", 00:30:21.660 "trtype": "tcp", 00:30:21.660 "traddr": "10.0.0.2", 00:30:21.660 "adrfam": "ipv4", 00:30:21.660 "trsvcid": "4420", 00:30:21.660 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:21.660 "hostaddr": "10.0.0.1", 00:30:21.660 "prchk_reftag": false, 00:30:21.660 "prchk_guard": false, 00:30:21.660 "hdgst": false, 00:30:21.660 "ddgst": false, 00:30:21.660 "allow_unrecognized_csi": false, 00:30:21.660 "method": "bdev_nvme_attach_controller", 00:30:21.660 "req_id": 1 00:30:21.660 } 00:30:21.660 Got JSON-RPC error response 00:30:21.660 response: 00:30:21.660 { 00:30:21.660 "code": -114, 00:30:21.660 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:21.660 } 00:30:21.660 00:03:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.660 request: 00:30:21.660 { 00:30:21.660 "name": "NVMe0", 00:30:21.660 "trtype": "tcp", 00:30:21.660 "traddr": "10.0.0.2", 00:30:21.660 "adrfam": "ipv4", 00:30:21.660 "trsvcid": "4420", 00:30:21.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:21.660 "hostaddr": "10.0.0.1", 00:30:21.660 "prchk_reftag": false, 00:30:21.660 "prchk_guard": false, 00:30:21.660 "hdgst": false, 00:30:21.660 "ddgst": false, 00:30:21.660 "multipath": "disable", 00:30:21.660 "allow_unrecognized_csi": false, 00:30:21.660 "method": "bdev_nvme_attach_controller", 00:30:21.660 "req_id": 1 00:30:21.660 } 00:30:21.660 Got JSON-RPC error response 00:30:21.660 response: 00:30:21.660 { 00:30:21.660 "code": -114, 00:30:21.660 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:21.660 } 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:21.660 00:03:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.660 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.660 request: 00:30:21.660 { 00:30:21.660 "name": "NVMe0", 00:30:21.660 "trtype": "tcp", 00:30:21.660 "traddr": "10.0.0.2", 00:30:21.660 "adrfam": "ipv4", 00:30:21.660 "trsvcid": "4420", 00:30:21.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:21.660 "hostaddr": "10.0.0.1", 00:30:21.660 "prchk_reftag": false, 00:30:21.660 "prchk_guard": false, 00:30:21.660 "hdgst": false, 00:30:21.660 "ddgst": false, 00:30:21.660 "multipath": "failover", 00:30:21.660 "allow_unrecognized_csi": false, 00:30:21.660 "method": "bdev_nvme_attach_controller", 00:30:21.661 "req_id": 1 00:30:21.661 } 00:30:21.661 Got JSON-RPC error response 00:30:21.661 response: 00:30:21.661 { 00:30:21.661 "code": -114, 00:30:21.661 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:21.661 } 00:30:21.661 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:21.661 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:21.661 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:21.661 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:21.661 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:21.661 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:21.661 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.661 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.661 NVMe0n1 00:30:21.661 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
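[annotation] The four NOT blocks above exercise bdev_nvme_attach_controller error handling on the bdevperf RPC socket: re-attaching the existing name NVMe0 with a different hostnqn, with a different subsystem NQN, or with -x disable / -x failover is rejected with JSON-RPC error -114, since a controller named NVMe0 already exists with an incompatible path or multipath setting. The final call succeeds because it only adds the second listener (port 4421) of the same subsystem as an extra path under the existing controller name. A sketch of the happy path against the bdevperf socket (flags copied from the log; -i corresponds to the "hostaddr" field in the request JSON above):
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers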
00:30:21.661 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:21.661 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.661 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.919 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.919 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:21.919 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.919 00:03:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.919 00:30:21.919 00:03:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.919 00:03:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:21.919 00:03:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:21.919 00:03:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.919 00:03:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.919 00:03:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.919 00:03:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:21.919 00:03:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:23.292 { 00:30:23.292 "results": [ 00:30:23.292 { 00:30:23.292 "job": "NVMe0n1", 00:30:23.292 "core_mask": "0x1", 00:30:23.292 "workload": "write", 00:30:23.292 "status": "finished", 00:30:23.292 "queue_depth": 128, 00:30:23.292 "io_size": 4096, 00:30:23.292 "runtime": 1.00917, 00:30:23.292 "iops": 12964.119028508576, 00:30:23.292 "mibps": 50.641089955111624, 00:30:23.292 "io_failed": 0, 00:30:23.292 "io_timeout": 0, 00:30:23.292 "avg_latency_us": 9856.772766468219, 00:30:23.292 "min_latency_us": 8398.317037037037, 00:30:23.292 "max_latency_us": 21651.152592592593 00:30:23.292 } 00:30:23.292 ], 00:30:23.292 "core_count": 1 00:30:23.292 } 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3565753 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 3565753 ']' 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3565753 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3565753 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3565753' 00:30:23.292 killing process with pid 3565753 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3565753 00:30:23.292 00:03:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3565753 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:30:24.233 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:30:24.233 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:24.233 [2024-11-10 00:03:46.522234] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:30:24.233 [2024-11-10 00:03:46.522372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3565753 ] 00:30:24.233 [2024-11-10 00:03:46.663479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.233 [2024-11-10 00:03:46.789630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.233 [2024-11-10 00:03:48.045675] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 19632ea4-e5a3-440b-8241-9fb4c21d9664 already exists 00:30:24.233 [2024-11-10 00:03:48.045730] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:19632ea4-e5a3-440b-8241-9fb4c21d9664 alias for bdev NVMe1n1 00:30:24.233 [2024-11-10 00:03:48.045763] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:24.233 Running I/O for 1 seconds... 00:30:24.233 12955.00 IOPS, 50.61 MiB/s 00:30:24.233 Latency(us) 00:30:24.233 [2024-11-09T23:03:50.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.233 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:24.233 NVMe0n1 : 1.01 12964.12 50.64 0.00 0.00 9856.77 8398.32 21651.15 00:30:24.233 [2024-11-09T23:03:50.435Z] =================================================================================================================== 00:30:24.234 [2024-11-09T23:03:50.435Z] Total : 12964.12 50.64 0.00 0.00 9856.77 8398.32 21651.15 00:30:24.234 Received shutdown signal, test time was about 1.000000 seconds 00:30:24.234 00:30:24.234 Latency(us) 00:30:24.234 [2024-11-09T23:03:50.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.234 [2024-11-09T23:03:50.435Z] =================================================================================================================== 00:30:24.234 [2024-11-09T23:03:50.435Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:24.234 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.234 rmmod nvme_tcp 00:30:24.234 rmmod nvme_fabrics 00:30:24.234 rmmod nvme_keyring 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:24.234 
00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3565597 ']' 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3565597 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3565597 ']' 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3565597 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3565597 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3565597' 00:30:24.234 killing process with pid 3565597 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3565597 00:30:24.234 00:03:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3565597 00:30:25.610 00:03:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:25.610 00:03:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:25.610 00:03:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:25.610 00:03:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:25.610 00:03:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:25.610 00:03:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:25.610 00:03:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:25.610 00:03:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.610 00:03:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:25.610 00:03:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.610 00:03:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.610 00:03:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.515 00:03:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.515 00:30:27.515 real 0m10.985s 00:30:27.515 user 0m22.844s 00:30:27.516 sys 0m2.728s 00:30:27.516 00:03:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:27.516 00:03:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:27.516 ************************************ 00:30:27.516 END TEST nvmf_multicontroller 00:30:27.516 ************************************ 00:30:27.516 00:03:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
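[annotation] Teardown mirrors setup: the bdevperf process and the nvmf_tgt pid (3565597) are killed, the nvme-tcp / nvme-fabrics / nvme-keyring modules are unloaded, the SPDK_NVMF-tagged iptables rule is dropped by replaying a filtered iptables-save, and _remove_spdk_ns tears down the target namespace. A rough manual equivalent (device and namespace names as in this run; the namespace-delete step is an assumption about what _remove_spdk_ns does):
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the test tagged
  ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1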
--transport=tcp 00:30:27.516 00:03:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:27.516 00:03:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:27.516 00:03:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.775 ************************************ 00:30:27.775 START TEST nvmf_aer 00:30:27.775 ************************************ 00:30:27.775 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:27.775 * Looking for test storage... 00:30:27.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:27.775 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:27.775 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:30:27.775 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:27.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.776 --rc genhtml_branch_coverage=1 00:30:27.776 --rc genhtml_function_coverage=1 00:30:27.776 --rc genhtml_legend=1 00:30:27.776 --rc geninfo_all_blocks=1 00:30:27.776 --rc geninfo_unexecuted_blocks=1 00:30:27.776 00:30:27.776 ' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:27.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.776 --rc genhtml_branch_coverage=1 00:30:27.776 --rc genhtml_function_coverage=1 00:30:27.776 --rc genhtml_legend=1 00:30:27.776 --rc geninfo_all_blocks=1 00:30:27.776 --rc geninfo_unexecuted_blocks=1 00:30:27.776 00:30:27.776 ' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:27.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.776 --rc genhtml_branch_coverage=1 00:30:27.776 --rc genhtml_function_coverage=1 00:30:27.776 --rc genhtml_legend=1 00:30:27.776 --rc geninfo_all_blocks=1 00:30:27.776 --rc geninfo_unexecuted_blocks=1 00:30:27.776 00:30:27.776 ' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:27.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.776 --rc genhtml_branch_coverage=1 00:30:27.776 --rc genhtml_function_coverage=1 00:30:27.776 --rc genhtml_legend=1 00:30:27.776 --rc geninfo_all_blocks=1 00:30:27.776 --rc geninfo_unexecuted_blocks=1 00:30:27.776 00:30:27.776 ' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:27.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:27.776 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:27.777 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.777 00:03:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.307 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:30.307 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:30.307 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:30.307 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:30.307 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:30.307 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:30.307 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:30.307 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:30.307 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:30.307 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:30.308 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:30.308 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:30.308 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:30.308 00:03:55 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:30.308 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:30.308 00:03:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:30.308 
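nvmftestinit has now split the two E810 ports so one machine can act as both host and target: cvl_0_0 becomes the target interface inside the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, and an iptables rule opens TCP/4420 ahead of the ping checks that follow. Reduced to the raw commands the trace shows (run as root; the cvl_* names are simply what these E810 ports enumerate as on this node, and the comment string is shortened here):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF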
00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:30.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:30.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:30:30.308 00:30:30.308 --- 10.0.0.2 ping statistics --- 00:30:30.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.308 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:30.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:30.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:30:30.308 00:30:30.308 --- 10.0.0.1 ping statistics --- 00:30:30.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.308 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:30.308 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3568369 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3568369 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3568369 ']' 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:30.309 00:03:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.309 [2024-11-10 00:03:56.225178] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:30:30.309 [2024-11-10 00:03:56.225309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.309 [2024-11-10 00:03:56.370603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:30.568 [2024-11-10 00:03:56.510266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.568 [2024-11-10 00:03:56.510335] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.568 [2024-11-10 00:03:56.510360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.568 [2024-11-10 00:03:56.510383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.568 [2024-11-10 00:03:56.510404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:30.568 [2024-11-10 00:03:56.513352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.568 [2024-11-10 00:03:56.513422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:30.568 [2024-11-10 00:03:56.513519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.568 [2024-11-10 00:03:56.513525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.135 [2024-11-10 00:03:57.243430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.135 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.393 Malloc0 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.393 [2024-11-10 00:03:57.368018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.393 [ 00:30:31.393 { 00:30:31.393 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:31.393 "subtype": "Discovery", 00:30:31.393 "listen_addresses": [], 00:30:31.393 "allow_any_host": true, 00:30:31.393 "hosts": [] 00:30:31.393 }, 00:30:31.393 { 00:30:31.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.393 "subtype": "NVMe", 00:30:31.393 "listen_addresses": [ 00:30:31.393 { 00:30:31.393 "trtype": "TCP", 00:30:31.393 "adrfam": "IPv4", 00:30:31.393 "traddr": "10.0.0.2", 00:30:31.393 "trsvcid": "4420" 00:30:31.393 } 00:30:31.393 ], 00:30:31.393 "allow_any_host": true, 00:30:31.393 "hosts": [], 00:30:31.393 "serial_number": "SPDK00000000000001", 00:30:31.393 "model_number": "SPDK bdev Controller", 00:30:31.393 "max_namespaces": 2, 00:30:31.393 "min_cntlid": 1, 00:30:31.393 "max_cntlid": 65519, 00:30:31.393 "namespaces": [ 00:30:31.393 { 00:30:31.393 "nsid": 1, 00:30:31.393 "bdev_name": "Malloc0", 00:30:31.393 "name": "Malloc0", 00:30:31.393 "nguid": "3ADE3DF095C844F3B415A2E5887AF339", 00:30:31.393 "uuid": "3ade3df0-95c8-44f3-b415-a2e5887af339" 00:30:31.393 } 00:30:31.393 ] 00:30:31.393 } 00:30:31.393 ] 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3568526 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:30:31.393 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:30:31.651 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:31.651 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 3 -lt 200 ']' 00:30:31.651 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=4 00:30:31.651 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:30:31.651 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:31.651 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:31.651 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:30:31.651 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:31.651 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.651 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.909 Malloc1 00:30:31.909 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.909 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:31.909 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.909 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.909 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.909 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:31.909 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.909 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:31.909 [ 00:30:31.909 { 00:30:31.909 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:31.909 "subtype": "Discovery", 00:30:31.909 "listen_addresses": [], 00:30:31.909 "allow_any_host": true, 00:30:31.909 "hosts": [] 00:30:31.909 }, 00:30:31.909 { 00:30:31.909 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.909 "subtype": "NVMe", 00:30:31.909 "listen_addresses": [ 00:30:31.909 { 00:30:31.909 "trtype": "TCP", 00:30:31.909 "adrfam": "IPv4", 00:30:31.909 "traddr": "10.0.0.2", 00:30:31.909 "trsvcid": "4420" 00:30:31.909 } 00:30:31.909 ], 00:30:31.909 "allow_any_host": true, 00:30:31.909 "hosts": [], 00:30:31.909 "serial_number": "SPDK00000000000001", 00:30:31.909 "model_number": "SPDK bdev Controller", 00:30:31.909 "max_namespaces": 2, 00:30:31.909 "min_cntlid": 1, 00:30:31.909 "max_cntlid": 65519, 00:30:31.909 "namespaces": [ 00:30:31.909 { 00:30:31.909 "nsid": 1, 00:30:31.909 "bdev_name": "Malloc0", 00:30:31.909 "name": "Malloc0", 00:30:31.909 "nguid": "3ADE3DF095C844F3B415A2E5887AF339", 00:30:31.909 "uuid": "3ade3df0-95c8-44f3-b415-a2e5887af339" 00:30:31.909 }, 00:30:31.909 { 00:30:31.909 "nsid": 2, 00:30:31.910 "bdev_name": "Malloc1", 00:30:31.910 "name": "Malloc1", 00:30:31.910 "nguid": "37DF0BF63F944176BCF41B19CE473AAA", 00:30:31.910 "uuid": "37df0bf6-3f94-4176-bcf4-1b19ce473aaa" 00:30:31.910 } 00:30:31.910 ] 00:30:31.910 } 00:30:31.910 ] 00:30:31.910 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.910 00:03:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3568526 00:30:31.910 Asynchronous Event Request test 00:30:31.910 Attaching to 10.0.0.2 00:30:31.910 Attached to 10.0.0.2 00:30:31.910 Registering asynchronous event callbacks... 00:30:31.910 Starting namespace attribute notice tests for all controllers... 00:30:31.910 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:31.910 aer_cb - Changed Namespace 00:30:31.910 Cleaning up... 
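The "Changed Namespace" callback is the pass condition for nvmf_aer: the aer helper (launched earlier with -n 2 and the touch file) connects to cnode1 and arms Asynchronous Event Requests, and adding Malloc1 as namespace 2 on the target fires a Namespace Attribute Changed notice (log page 0x04), which the callback logs before cleanup. Outside the harness the same flow maps onto plain rpc.py calls; rpc_cmd in the trace is a thin wrapper around rpc.py, and the relative paths below assume the SPDK source tree as the working directory:

  # target side, mirroring the rpc_cmd calls in the trace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: arm AER, wait until the helper signals readiness, then trigger the event
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # fires the Changed Namespace AEN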
00:30:31.910 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:31.910 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.910 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:32.168 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:32.168 rmmod nvme_tcp 00:30:32.426 rmmod nvme_fabrics 00:30:32.426 rmmod nvme_keyring 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3568369 ']' 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3568369 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3568369 ']' 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3568369 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3568369 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3568369' 00:30:32.426 killing process with pid 3568369 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # 
kill 3568369 00:30:32.426 00:03:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3568369 00:30:33.366 00:03:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.366 00:03:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.366 00:03:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.366 00:03:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:33.366 00:03:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:33.366 00:03:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.366 00:03:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:33.623 00:03:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.623 00:03:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.623 00:03:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.623 00:03:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.623 00:03:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.581 00:04:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.581 00:30:35.581 real 0m7.878s 00:30:35.581 user 0m12.098s 00:30:35.581 sys 0m2.276s 00:30:35.581 00:04:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:35.581 00:04:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:35.581 ************************************ 00:30:35.581 END TEST nvmf_aer 00:30:35.581 ************************************ 00:30:35.581 00:04:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:35.581 00:04:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:35.581 00:04:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:35.581 00:04:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.581 ************************************ 00:30:35.581 START TEST nvmf_async_init 00:30:35.581 ************************************ 00:30:35.581 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:35.581 * Looking for test storage... 
00:30:35.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:35.581 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:35.581 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:30:35.581 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:35.847 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:35.847 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:35.847 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:35.847 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:35.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.848 --rc genhtml_branch_coverage=1 00:30:35.848 --rc genhtml_function_coverage=1 00:30:35.848 --rc genhtml_legend=1 00:30:35.848 --rc geninfo_all_blocks=1 00:30:35.848 --rc geninfo_unexecuted_blocks=1 00:30:35.848 00:30:35.848 ' 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:35.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.848 --rc genhtml_branch_coverage=1 00:30:35.848 --rc genhtml_function_coverage=1 00:30:35.848 --rc genhtml_legend=1 00:30:35.848 --rc geninfo_all_blocks=1 00:30:35.848 --rc geninfo_unexecuted_blocks=1 00:30:35.848 00:30:35.848 ' 00:30:35.848 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:35.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.848 --rc genhtml_branch_coverage=1 00:30:35.848 --rc genhtml_function_coverage=1 00:30:35.849 --rc genhtml_legend=1 00:30:35.849 --rc geninfo_all_blocks=1 00:30:35.849 --rc geninfo_unexecuted_blocks=1 00:30:35.849 00:30:35.849 ' 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:35.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.849 --rc genhtml_branch_coverage=1 00:30:35.849 --rc genhtml_function_coverage=1 00:30:35.849 --rc genhtml_legend=1 00:30:35.849 --rc geninfo_all_blocks=1 00:30:35.849 --rc geninfo_unexecuted_blocks=1 00:30:35.849 00:30:35.849 ' 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.849 00:04:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.849 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:35.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:35.850 00:04:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1a9f070f8ea34177a72f19d1d67bb854 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:35.850 00:04:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:37.755 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:37.755 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:37.755 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:37.755 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.755 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.756 00:04:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:37.756 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:38.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:30:38.014 00:30:38.014 --- 10.0.0.2 ping statistics --- 00:30:38.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.014 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:38.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:30:38.014 00:30:38.014 --- 10.0.0.1 ping statistics --- 00:30:38.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.014 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3570727 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3570727 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 3570727 ']' 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:38.014 00:04:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:38.014 [2024-11-10 00:04:04.086880] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
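(A condensed sketch of the network bring-up traced just above. nvmf_tcp_init in test/nvmf/common.sh performs the equivalent of the commands below; the interface names cvl_0_0/cvl_0_1, the namespace cvl_0_0_ns_spdk, and the 10.0.0.x/24 addresses are specific to this run, and the real helper additionally tags the iptables rule with an SPDK_NVMF comment so teardown can strip it later.)
# target-side port moves into a private namespace, initiator-side port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1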
00:30:38.014 [2024-11-10 00:04:04.087029] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.272 [2024-11-10 00:04:04.230116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.272 [2024-11-10 00:04:04.349303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.272 [2024-11-10 00:04:04.349373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.272 [2024-11-10 00:04:04.349394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:38.272 [2024-11-10 00:04:04.349414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:38.272 [2024-11-10 00:04:04.349430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:38.272 [2024-11-10 00:04:04.350840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 [2024-11-10 00:04:05.101900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 null0 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1a9f070f8ea34177a72f19d1d67bb854 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 [2024-11-10 00:04:05.142145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 nvme0n1 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.206 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 [ 00:30:39.206 { 00:30:39.206 "name": "nvme0n1", 00:30:39.206 "aliases": [ 00:30:39.206 "1a9f070f-8ea3-4177-a72f-19d1d67bb854" 00:30:39.207 ], 00:30:39.207 "product_name": "NVMe disk", 00:30:39.207 "block_size": 512, 00:30:39.207 "num_blocks": 2097152, 00:30:39.207 "uuid": "1a9f070f-8ea3-4177-a72f-19d1d67bb854", 00:30:39.207 "numa_id": 0, 00:30:39.207 "assigned_rate_limits": { 00:30:39.207 "rw_ios_per_sec": 0, 00:30:39.207 "rw_mbytes_per_sec": 0, 00:30:39.207 "r_mbytes_per_sec": 0, 00:30:39.207 "w_mbytes_per_sec": 0 00:30:39.207 }, 00:30:39.207 "claimed": false, 00:30:39.207 "zoned": false, 00:30:39.207 "supported_io_types": { 00:30:39.207 "read": true, 00:30:39.207 "write": true, 00:30:39.207 "unmap": false, 00:30:39.207 "flush": true, 00:30:39.207 "reset": true, 00:30:39.207 "nvme_admin": true, 00:30:39.207 "nvme_io": true, 00:30:39.207 "nvme_io_md": false, 00:30:39.207 "write_zeroes": true, 00:30:39.207 "zcopy": false, 00:30:39.207 "get_zone_info": false, 00:30:39.207 "zone_management": false, 00:30:39.207 "zone_append": false, 00:30:39.207 "compare": true, 00:30:39.207 "compare_and_write": true, 00:30:39.207 "abort": true, 00:30:39.207 "seek_hole": false, 00:30:39.207 "seek_data": false, 00:30:39.207 "copy": true, 00:30:39.207 "nvme_iov_md": false 00:30:39.207 }, 00:30:39.207 
"memory_domains": [ 00:30:39.207 { 00:30:39.207 "dma_device_id": "system", 00:30:39.207 "dma_device_type": 1 00:30:39.207 } 00:30:39.207 ], 00:30:39.207 "driver_specific": { 00:30:39.207 "nvme": [ 00:30:39.207 { 00:30:39.207 "trid": { 00:30:39.207 "trtype": "TCP", 00:30:39.207 "adrfam": "IPv4", 00:30:39.207 "traddr": "10.0.0.2", 00:30:39.207 "trsvcid": "4420", 00:30:39.207 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:39.207 }, 00:30:39.207 "ctrlr_data": { 00:30:39.207 "cntlid": 1, 00:30:39.207 "vendor_id": "0x8086", 00:30:39.207 "model_number": "SPDK bdev Controller", 00:30:39.207 "serial_number": "00000000000000000000", 00:30:39.207 "firmware_revision": "25.01", 00:30:39.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.207 "oacs": { 00:30:39.207 "security": 0, 00:30:39.207 "format": 0, 00:30:39.207 "firmware": 0, 00:30:39.207 "ns_manage": 0 00:30:39.207 }, 00:30:39.207 "multi_ctrlr": true, 00:30:39.207 "ana_reporting": false 00:30:39.207 }, 00:30:39.207 "vs": { 00:30:39.207 "nvme_version": "1.3" 00:30:39.207 }, 00:30:39.207 "ns_data": { 00:30:39.207 "id": 1, 00:30:39.207 "can_share": true 00:30:39.207 } 00:30:39.207 } 00:30:39.207 ], 00:30:39.207 "mp_policy": "active_passive" 00:30:39.207 } 00:30:39.207 } 00:30:39.207 ] 00:30:39.207 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.207 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:39.207 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.207 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.207 [2024-11-10 00:04:05.399017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:39.207 [2024-11-10 00:04:05.399142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:39.465 [2024-11-10 00:04:05.531810] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:30:39.465 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.465 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:39.465 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.465 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.465 [ 00:30:39.465 { 00:30:39.465 "name": "nvme0n1", 00:30:39.466 "aliases": [ 00:30:39.466 "1a9f070f-8ea3-4177-a72f-19d1d67bb854" 00:30:39.466 ], 00:30:39.466 "product_name": "NVMe disk", 00:30:39.466 "block_size": 512, 00:30:39.466 "num_blocks": 2097152, 00:30:39.466 "uuid": "1a9f070f-8ea3-4177-a72f-19d1d67bb854", 00:30:39.466 "numa_id": 0, 00:30:39.466 "assigned_rate_limits": { 00:30:39.466 "rw_ios_per_sec": 0, 00:30:39.466 "rw_mbytes_per_sec": 0, 00:30:39.466 "r_mbytes_per_sec": 0, 00:30:39.466 "w_mbytes_per_sec": 0 00:30:39.466 }, 00:30:39.466 "claimed": false, 00:30:39.466 "zoned": false, 00:30:39.466 "supported_io_types": { 00:30:39.466 "read": true, 00:30:39.466 "write": true, 00:30:39.466 "unmap": false, 00:30:39.466 "flush": true, 00:30:39.466 "reset": true, 00:30:39.466 "nvme_admin": true, 00:30:39.466 "nvme_io": true, 00:30:39.466 "nvme_io_md": false, 00:30:39.466 "write_zeroes": true, 00:30:39.466 "zcopy": false, 00:30:39.466 "get_zone_info": false, 00:30:39.466 "zone_management": false, 00:30:39.466 "zone_append": false, 00:30:39.466 "compare": true, 00:30:39.466 "compare_and_write": true, 00:30:39.466 "abort": true, 00:30:39.466 "seek_hole": false, 00:30:39.466 "seek_data": false, 00:30:39.466 "copy": true, 00:30:39.466 "nvme_iov_md": false 00:30:39.466 }, 00:30:39.466 "memory_domains": [ 00:30:39.466 { 00:30:39.466 "dma_device_id": "system", 00:30:39.466 "dma_device_type": 1 00:30:39.466 } 00:30:39.466 ], 00:30:39.466 "driver_specific": { 00:30:39.466 "nvme": [ 00:30:39.466 { 00:30:39.466 "trid": { 00:30:39.466 "trtype": "TCP", 00:30:39.466 "adrfam": "IPv4", 00:30:39.466 "traddr": "10.0.0.2", 00:30:39.466 "trsvcid": "4420", 00:30:39.466 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:39.466 }, 00:30:39.466 "ctrlr_data": { 00:30:39.466 "cntlid": 2, 00:30:39.466 "vendor_id": "0x8086", 00:30:39.466 "model_number": "SPDK bdev Controller", 00:30:39.466 "serial_number": "00000000000000000000", 00:30:39.466 "firmware_revision": "25.01", 00:30:39.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.466 "oacs": { 00:30:39.466 "security": 0, 00:30:39.466 "format": 0, 00:30:39.466 "firmware": 0, 00:30:39.466 "ns_manage": 0 00:30:39.466 }, 00:30:39.466 "multi_ctrlr": true, 00:30:39.466 "ana_reporting": false 00:30:39.466 }, 00:30:39.466 "vs": { 00:30:39.466 "nvme_version": "1.3" 00:30:39.466 }, 00:30:39.466 "ns_data": { 00:30:39.466 "id": 1, 00:30:39.466 "can_share": true 00:30:39.466 } 00:30:39.466 } 00:30:39.466 ], 00:30:39.466 "mp_policy": "active_passive" 00:30:39.466 } 00:30:39.466 } 00:30:39.466 ] 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
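(Condensed, the target-side setup that produced the namespace dumped above is the rpc_cmd sequence below, with every argument copied from the trace; bdev_null_create's 1024/512 arguments give a 1024 MiB null bdev with 512-byte blocks, which matches num_blocks 2097152 in the dumps, and the -g value is the uuidgen-derived NGUID of this run.)
rpc_cmd nvmf_create_transport -t tcp -o
rpc_cmd bdev_null_create null0 1024 512
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1a9f070f8ea34177a72f19d1d67bb854
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0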
00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ttmPSCDOHP 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ttmPSCDOHP 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.ttmPSCDOHP 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.466 [2024-11-10 00:04:05.591779] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:39.466 [2024-11-10 00:04:05.592014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.466 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.466 [2024-11-10 00:04:05.607789] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:39.724 nvme0n1 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.724 [ 00:30:39.724 { 00:30:39.724 "name": "nvme0n1", 00:30:39.724 "aliases": [ 00:30:39.724 "1a9f070f-8ea3-4177-a72f-19d1d67bb854" 00:30:39.724 ], 00:30:39.724 "product_name": "NVMe disk", 00:30:39.724 "block_size": 512, 00:30:39.724 "num_blocks": 2097152, 00:30:39.724 "uuid": "1a9f070f-8ea3-4177-a72f-19d1d67bb854", 00:30:39.724 "numa_id": 0, 00:30:39.724 "assigned_rate_limits": { 00:30:39.724 "rw_ios_per_sec": 0, 00:30:39.724 "rw_mbytes_per_sec": 0, 00:30:39.724 "r_mbytes_per_sec": 0, 00:30:39.724 "w_mbytes_per_sec": 0 00:30:39.724 }, 00:30:39.724 "claimed": false, 00:30:39.724 "zoned": false, 00:30:39.724 "supported_io_types": { 00:30:39.724 "read": true, 00:30:39.724 "write": true, 00:30:39.724 "unmap": false, 00:30:39.724 "flush": true, 00:30:39.724 "reset": true, 00:30:39.724 "nvme_admin": true, 00:30:39.724 "nvme_io": true, 00:30:39.724 "nvme_io_md": false, 00:30:39.724 "write_zeroes": true, 00:30:39.724 "zcopy": false, 00:30:39.724 "get_zone_info": false, 00:30:39.724 "zone_management": false, 00:30:39.724 "zone_append": false, 00:30:39.724 "compare": true, 00:30:39.724 "compare_and_write": true, 00:30:39.724 "abort": true, 00:30:39.724 "seek_hole": false, 00:30:39.724 "seek_data": false, 00:30:39.724 "copy": true, 00:30:39.724 "nvme_iov_md": false 00:30:39.724 }, 00:30:39.724 "memory_domains": [ 00:30:39.724 { 00:30:39.724 "dma_device_id": "system", 00:30:39.724 "dma_device_type": 1 00:30:39.724 } 00:30:39.724 ], 00:30:39.724 "driver_specific": { 00:30:39.724 "nvme": [ 00:30:39.724 { 00:30:39.724 "trid": { 00:30:39.724 "trtype": "TCP", 00:30:39.724 "adrfam": "IPv4", 00:30:39.724 "traddr": "10.0.0.2", 00:30:39.724 "trsvcid": "4421", 00:30:39.724 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:39.724 }, 00:30:39.724 "ctrlr_data": { 00:30:39.724 "cntlid": 3, 00:30:39.724 "vendor_id": "0x8086", 00:30:39.724 "model_number": "SPDK bdev Controller", 00:30:39.724 "serial_number": "00000000000000000000", 00:30:39.724 "firmware_revision": "25.01", 00:30:39.724 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.724 "oacs": { 00:30:39.724 "security": 0, 00:30:39.724 "format": 0, 00:30:39.724 "firmware": 0, 00:30:39.724 "ns_manage": 0 00:30:39.724 }, 00:30:39.724 "multi_ctrlr": true, 00:30:39.724 "ana_reporting": false 00:30:39.724 }, 00:30:39.724 "vs": { 00:30:39.724 "nvme_version": "1.3" 00:30:39.724 }, 00:30:39.724 "ns_data": { 00:30:39.724 "id": 1, 00:30:39.724 "can_share": true 00:30:39.724 } 00:30:39.724 } 00:30:39.724 ], 00:30:39.724 "mp_policy": "active_passive" 00:30:39.724 } 00:30:39.724 } 00:30:39.724 ] 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.ttmPSCDOHP 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
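(The secure-channel variant just exercised, condensed; the PSK interchange string, the key name key0, the host NQN and port 4421 are all taken from this run, and both the target and bdev_nvme flag TLS support as experimental in the messages above.)
key_path=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"
rpc_cmd keyring_file_add_key key0 "$key_path"
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0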
00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:39.724 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:39.725 rmmod nvme_tcp 00:30:39.725 rmmod nvme_fabrics 00:30:39.725 rmmod nvme_keyring 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3570727 ']' 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3570727 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 3570727 ']' 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 3570727 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3570727 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3570727' 00:30:39.725 killing process with pid 3570727 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 3570727 00:30:39.725 00:04:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 3570727 00:30:41.101 00:04:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:41.101 00:04:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:41.101 00:04:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:41.101 00:04:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:41.101 00:04:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:41.101 00:04:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:41.101 00:04:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:41.101 00:04:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.101 00:04:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.101 00:04:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:30:41.101 00:04:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.101 00:04:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.006 00:04:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:43.006 00:30:43.006 real 0m7.335s 00:30:43.006 user 0m4.049s 00:30:43.006 sys 0m1.991s 00:30:43.006 00:04:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:43.006 00:04:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:43.006 ************************************ 00:30:43.006 END TEST nvmf_async_init 00:30:43.006 ************************************ 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.006 ************************************ 00:30:43.006 START TEST dma 00:30:43.006 ************************************ 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:43.006 * Looking for test storage... 00:30:43.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.006 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.007 --rc genhtml_branch_coverage=1 00:30:43.007 --rc genhtml_function_coverage=1 00:30:43.007 --rc genhtml_legend=1 00:30:43.007 --rc geninfo_all_blocks=1 00:30:43.007 --rc geninfo_unexecuted_blocks=1 00:30:43.007 00:30:43.007 ' 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.007 --rc genhtml_branch_coverage=1 00:30:43.007 --rc genhtml_function_coverage=1 00:30:43.007 --rc genhtml_legend=1 00:30:43.007 --rc geninfo_all_blocks=1 00:30:43.007 --rc geninfo_unexecuted_blocks=1 00:30:43.007 00:30:43.007 ' 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.007 --rc genhtml_branch_coverage=1 00:30:43.007 --rc genhtml_function_coverage=1 00:30:43.007 --rc genhtml_legend=1 00:30:43.007 --rc geninfo_all_blocks=1 00:30:43.007 --rc geninfo_unexecuted_blocks=1 00:30:43.007 00:30:43.007 ' 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.007 --rc genhtml_branch_coverage=1 00:30:43.007 --rc genhtml_function_coverage=1 00:30:43.007 --rc genhtml_legend=1 00:30:43.007 --rc geninfo_all_blocks=1 00:30:43.007 --rc geninfo_unexecuted_blocks=1 00:30:43.007 00:30:43.007 ' 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.007 
00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.007 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:43.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:43.266 00:30:43.266 real 0m0.176s 00:30:43.266 user 0m0.118s 00:30:43.266 sys 0m0.066s 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:43.266 ************************************ 00:30:43.266 END TEST dma 00:30:43.266 ************************************ 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.266 ************************************ 00:30:43.266 START TEST nvmf_identify 00:30:43.266 
************************************ 00:30:43.266 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:43.266 * Looking for test storage... 00:30:43.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:43.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.267 --rc genhtml_branch_coverage=1 00:30:43.267 --rc genhtml_function_coverage=1 00:30:43.267 --rc genhtml_legend=1 00:30:43.267 --rc geninfo_all_blocks=1 00:30:43.267 --rc geninfo_unexecuted_blocks=1 00:30:43.267 00:30:43.267 ' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:43.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.267 --rc genhtml_branch_coverage=1 00:30:43.267 --rc genhtml_function_coverage=1 00:30:43.267 --rc genhtml_legend=1 00:30:43.267 --rc geninfo_all_blocks=1 00:30:43.267 --rc geninfo_unexecuted_blocks=1 00:30:43.267 00:30:43.267 ' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:43.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.267 --rc genhtml_branch_coverage=1 00:30:43.267 --rc genhtml_function_coverage=1 00:30:43.267 --rc genhtml_legend=1 00:30:43.267 --rc geninfo_all_blocks=1 00:30:43.267 --rc geninfo_unexecuted_blocks=1 00:30:43.267 00:30:43.267 ' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:43.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.267 --rc genhtml_branch_coverage=1 00:30:43.267 --rc genhtml_function_coverage=1 00:30:43.267 --rc genhtml_legend=1 00:30:43.267 --rc geninfo_all_blocks=1 00:30:43.267 --rc geninfo_unexecuted_blocks=1 00:30:43.267 00:30:43.267 ' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:43.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:43.267 00:04:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:45.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:45.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
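For reference, the two ports matched above (0x8086:0x159b, the Intel E810 "ice" functions at 0000:0a:00.0 and 0000:0a:00.1) can be checked by hand with the sketch below; lspci availability and the sysfs layout the script walks are assumptions, not part of the test run:

  lspci -D -d 8086:159b                       # should list the two E810 functions found above
  ls /sys/bus/pci/devices/0000:0a:00.0/net/   # net device behind port 0 (cvl_0_0 in this run)
  ls /sys/bus/pci/devices/0000:0a:00.1/net/   # net device behind port 1 (cvl_0_1 in this run)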
00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:45.801 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:45.801 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.801 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:30:45.802 00:30:45.802 --- 10.0.0.2 ping statistics --- 00:30:45.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.802 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:45.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:30:45.802 00:30:45.802 --- 10.0.0.1 ping statistics --- 00:30:45.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.802 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3573046 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3573046 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 3573046 ']' 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:45.802 00:04:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:45.802 [2024-11-10 00:04:11.676752] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
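Condensed for reference, the target-side network setup and launch performed above amounts to the following sequence; interface names, addresses, and flags are taken from the log lines, while the backgrounding and wait are only a sketch of what the test harness does around them:

  ip netns add cvl_0_0_ns_spdk                       # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  modprobe nvme-tcp                                  # NVMe/TCP initiator driver on the host side
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # ...then wait for the RPC socket (/var/tmp/spdk.sock) before issuing rpc_cmd calls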
00:30:45.802 [2024-11-10 00:04:11.676893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.802 [2024-11-10 00:04:11.827758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:45.802 [2024-11-10 00:04:11.965922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.802 [2024-11-10 00:04:11.965997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.802 [2024-11-10 00:04:11.966022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.802 [2024-11-10 00:04:11.966046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.802 [2024-11-10 00:04:11.966065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.802 [2024-11-10 00:04:11.968798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.802 [2024-11-10 00:04:11.968883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:45.802 [2024-11-10 00:04:11.968913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.802 [2024-11-10 00:04:11.968918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:46.735 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.736 [2024-11-10 00:04:12.644774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.736 Malloc0 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.736 [2024-11-10 00:04:12.777124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.736 [ 00:30:46.736 { 00:30:46.736 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:46.736 "subtype": "Discovery", 00:30:46.736 "listen_addresses": [ 00:30:46.736 { 00:30:46.736 "trtype": "TCP", 00:30:46.736 "adrfam": "IPv4", 00:30:46.736 "traddr": "10.0.0.2", 00:30:46.736 "trsvcid": "4420" 00:30:46.736 } 00:30:46.736 ], 00:30:46.736 "allow_any_host": true, 00:30:46.736 "hosts": [] 00:30:46.736 }, 00:30:46.736 { 00:30:46.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.736 "subtype": "NVMe", 00:30:46.736 "listen_addresses": [ 00:30:46.736 { 00:30:46.736 "trtype": "TCP", 00:30:46.736 "adrfam": "IPv4", 00:30:46.736 "traddr": "10.0.0.2", 00:30:46.736 "trsvcid": "4420" 00:30:46.736 } 00:30:46.736 ], 00:30:46.736 "allow_any_host": true, 00:30:46.736 "hosts": [], 00:30:46.736 "serial_number": "SPDK00000000000001", 00:30:46.736 "model_number": "SPDK bdev Controller", 00:30:46.736 "max_namespaces": 32, 00:30:46.736 "min_cntlid": 1, 00:30:46.736 "max_cntlid": 65519, 00:30:46.736 "namespaces": [ 00:30:46.736 { 00:30:46.736 "nsid": 1, 00:30:46.736 "bdev_name": "Malloc0", 00:30:46.736 "name": "Malloc0", 00:30:46.736 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:46.736 "eui64": "ABCDEF0123456789", 00:30:46.736 "uuid": "90a78488-bbd0-47ea-852a-87020bcab333" 00:30:46.736 } 00:30:46.736 ] 00:30:46.736 } 00:30:46.736 ] 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.736 00:04:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:46.736 [2024-11-10 00:04:12.840051] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:30:46.736 [2024-11-10 00:04:12.840147] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3573283 ] 00:30:46.736 [2024-11-10 00:04:12.916039] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:46.736 [2024-11-10 00:04:12.916174] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:46.736 [2024-11-10 00:04:12.916197] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:46.736 [2024-11-10 00:04:12.916231] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:46.736 [2024-11-10 00:04:12.916256] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:46.736 [2024-11-10 00:04:12.920188] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:46.736 [2024-11-10 00:04:12.920284] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:46.736 [2024-11-10 00:04:12.927619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:46.736 [2024-11-10 00:04:12.927660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:46.736 [2024-11-10 00:04:12.927678] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:46.736 [2024-11-10 00:04:12.927689] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:46.736 [2024-11-10 00:04:12.927782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.736 [2024-11-10 00:04:12.927804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.736 [2024-11-10 00:04:12.927818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.736 [2024-11-10 00:04:12.927860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:46.736 [2024-11-10 00:04:12.927902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.736 [2024-11-10 00:04:12.935630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.736 [2024-11-10 00:04:12.935667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.736 [2024-11-10 00:04:12.935682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.736 [2024-11-10 00:04:12.935697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.736 [2024-11-10 00:04:12.935728] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:46.736 [2024-11-10 00:04:12.935751] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:46.736 [2024-11-10 00:04:12.935779] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:46.736 [2024-11-10 
00:04:12.935816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.736 [2024-11-10 00:04:12.935833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.736 [2024-11-10 00:04:12.935844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.736 [2024-11-10 00:04:12.935866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.736 [2024-11-10 00:04:12.935916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.736 [2024-11-10 00:04:12.936127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.736 [2024-11-10 00:04:12.936150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.736 [2024-11-10 00:04:12.936163] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.736 [2024-11-10 00:04:12.936176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.736 [2024-11-10 00:04:12.936213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:46.736 [2024-11-10 00:04:12.936237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:46.736 [2024-11-10 00:04:12.936259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.736 [2024-11-10 00:04:12.936273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.736 [2024-11-10 00:04:12.936285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.736 [2024-11-10 00:04:12.936310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.737 [2024-11-10 00:04:12.936345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.737 [2024-11-10 00:04:12.936484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.737 [2024-11-10 00:04:12.936506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.737 [2024-11-10 00:04:12.936517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.737 [2024-11-10 00:04:12.936529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.737 [2024-11-10 00:04:12.936545] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:46.737 [2024-11-10 00:04:12.936570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:46.737 [2024-11-10 00:04:12.936607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.997 [2024-11-10 00:04:12.936627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.997 [2024-11-10 00:04:12.936641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.997 [2024-11-10 00:04:12.936662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.997 [2024-11-10 00:04:12.936696] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.997 [2024-11-10 00:04:12.936800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.997 [2024-11-10 00:04:12.936822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.997 [2024-11-10 00:04:12.936839] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.997 [2024-11-10 00:04:12.936852] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.997 [2024-11-10 00:04:12.936868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:46.997 [2024-11-10 00:04:12.936896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.997 [2024-11-10 00:04:12.936913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.997 [2024-11-10 00:04:12.936924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.997 [2024-11-10 00:04:12.936944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.997 [2024-11-10 00:04:12.936988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.997 [2024-11-10 00:04:12.937132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.997 [2024-11-10 00:04:12.937153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.997 [2024-11-10 00:04:12.937165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.997 [2024-11-10 00:04:12.937176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.997 [2024-11-10 00:04:12.937192] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:46.997 [2024-11-10 00:04:12.937207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:46.997 [2024-11-10 00:04:12.937234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:46.997 [2024-11-10 00:04:12.937355] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:46.997 [2024-11-10 00:04:12.937370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:46.997 [2024-11-10 00:04:12.937396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.997 [2024-11-10 00:04:12.937410] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.997 [2024-11-10 00:04:12.937425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.997 [2024-11-10 00:04:12.937445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.997 [2024-11-10 00:04:12.937483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.997 [2024-11-10 00:04:12.937654] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.997 [2024-11-10 00:04:12.937677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.997 [2024-11-10 00:04:12.937688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.997 [2024-11-10 00:04:12.937699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.997 [2024-11-10 00:04:12.937715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:46.997 [2024-11-10 00:04:12.937743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.997 [2024-11-10 00:04:12.937764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.997 [2024-11-10 00:04:12.937778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.997 [2024-11-10 00:04:12.937798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.998 [2024-11-10 00:04:12.937830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.998 [2024-11-10 00:04:12.937944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.998 [2024-11-10 00:04:12.937974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.998 [2024-11-10 00:04:12.938004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.938022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.998 [2024-11-10 00:04:12.938045] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:46.998 [2024-11-10 00:04:12.938067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:46.998 [2024-11-10 00:04:12.938099] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:46.998 [2024-11-10 00:04:12.938135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:46.998 [2024-11-10 00:04:12.938181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.938206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.998 [2024-11-10 00:04:12.938254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.998 [2024-11-10 00:04:12.938323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.998 [2024-11-10 00:04:12.938526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.998 [2024-11-10 00:04:12.938559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.998 [2024-11-10 00:04:12.938580] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.938615] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info 
on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:46.998 [2024-11-10 00:04:12.938643] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:46.998 [2024-11-10 00:04:12.938667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.938719] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.938743] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.978695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.998 [2024-11-10 00:04:12.978728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.998 [2024-11-10 00:04:12.978742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.978755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.998 [2024-11-10 00:04:12.978783] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:46.998 [2024-11-10 00:04:12.978801] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:46.998 [2024-11-10 00:04:12.978815] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:46.998 [2024-11-10 00:04:12.978836] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:46.998 [2024-11-10 00:04:12.978851] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:46.998 [2024-11-10 00:04:12.978865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:46.998 [2024-11-10 00:04:12.978894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:46.998 [2024-11-10 00:04:12.978924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.978941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.978976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.998 [2024-11-10 00:04:12.979004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:46.998 [2024-11-10 00:04:12.979041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.998 [2024-11-10 00:04:12.979192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.998 [2024-11-10 00:04:12.979215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.998 [2024-11-10 00:04:12.979226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.979238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:46.998 [2024-11-10 00:04:12.979260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.979275] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.979286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:46.998 [2024-11-10 00:04:12.979305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.998 [2024-11-10 00:04:12.979323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.979335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.979345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:46.998 [2024-11-10 00:04:12.979382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.998 [2024-11-10 00:04:12.979401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.979413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.979423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:46.998 [2024-11-10 00:04:12.979438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.998 [2024-11-10 00:04:12.979454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.979470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.979481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:46.998 [2024-11-10 00:04:12.979511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.998 [2024-11-10 00:04:12.979526] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:46.998 [2024-11-10 00:04:12.979553] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:46.998 [2024-11-10 00:04:12.983622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.983647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:46.998 [2024-11-10 00:04:12.983667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.998 [2024-11-10 00:04:12.983704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:46.998 [2024-11-10 00:04:12.983738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:46.998 [2024-11-10 00:04:12.983751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:46.998 [2024-11-10 00:04:12.983763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:46.998 [2024-11-10 00:04:12.983775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:46.998 [2024-11-10 00:04:12.983932] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.998 [2024-11-10 00:04:12.983953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.998 [2024-11-10 00:04:12.983965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.983977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:46.998 [2024-11-10 00:04:12.983994] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:46.998 [2024-11-10 00:04:12.984026] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:46.998 [2024-11-10 00:04:12.984065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.984082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:46.998 [2024-11-10 00:04:12.984102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.998 [2024-11-10 00:04:12.984134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:46.998 [2024-11-10 00:04:12.984305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.998 [2024-11-10 00:04:12.984328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.998 [2024-11-10 00:04:12.984348] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.984360] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:46.998 [2024-11-10 00:04:12.984373] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:46.998 [2024-11-10 00:04:12.984386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.984419] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.998 [2024-11-10 00:04:12.984436] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:12.984460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.999 [2024-11-10 00:04:12.984484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.999 [2024-11-10 00:04:12.984496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:12.984508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:46.999 [2024-11-10 00:04:12.984546] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:46.999 [2024-11-10 00:04:12.984632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:12.984651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:46.999 [2024-11-10 00:04:12.984673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.999 [2024-11-10 00:04:12.984693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:30:46.999 [2024-11-10 00:04:12.984711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:12.984725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:46.999 [2024-11-10 00:04:12.984743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.999 [2024-11-10 00:04:12.984777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:46.999 [2024-11-10 00:04:12.984796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:46.999 [2024-11-10 00:04:12.985050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.999 [2024-11-10 00:04:12.985089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.999 [2024-11-10 00:04:12.985101] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:12.985119] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:46.999 [2024-11-10 00:04:12.985133] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:46.999 [2024-11-10 00:04:12.985145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:12.985167] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:12.985181] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:12.985196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.999 [2024-11-10 00:04:12.985212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.999 [2024-11-10 00:04:12.985222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:12.985234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:46.999 [2024-11-10 00:04:13.025715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.999 [2024-11-10 00:04:13.025744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.999 [2024-11-10 00:04:13.025757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:13.025769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:46.999 [2024-11-10 00:04:13.025817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:13.025837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:46.999 [2024-11-10 00:04:13.025861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.999 [2024-11-10 00:04:13.025911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:46.999 [2024-11-10 00:04:13.026072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.999 [2024-11-10 00:04:13.026094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.999 [2024-11-10 00:04:13.026111] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.999 
[2024-11-10 00:04:13.026122] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:46.999 [2024-11-10 00:04:13.026134] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:46.999 [2024-11-10 00:04:13.026146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:13.026178] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:13.026194] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:13.066680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.999 [2024-11-10 00:04:13.066708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.999 [2024-11-10 00:04:13.066734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:13.066746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:46.999 [2024-11-10 00:04:13.066778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:13.066795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:46.999 [2024-11-10 00:04:13.066817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.999 [2024-11-10 00:04:13.066861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:46.999 [2024-11-10 00:04:13.067029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:46.999 [2024-11-10 00:04:13.067056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:46.999 [2024-11-10 00:04:13.067069] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:13.067080] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:30:46.999 [2024-11-10 00:04:13.067092] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:30:46.999 [2024-11-10 00:04:13.067103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:13.067125] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:13.067153] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:13.111634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:46.999 [2024-11-10 00:04:13.111662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:46.999 [2024-11-10 00:04:13.111675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:46.999 [2024-11-10 00:04:13.111686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:46.999 ===================================================== 00:30:46.999 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:46.999 ===================================================== 00:30:46.999 Controller Capabilities/Features 00:30:46.999 ================================ 00:30:46.999 Vendor ID: 0000 00:30:46.999 Subsystem Vendor ID: 0000 
00:30:46.999 Serial Number: .................... 00:30:46.999 Model Number: ........................................ 00:30:46.999 Firmware Version: 25.01 00:30:46.999 Recommended Arb Burst: 0 00:30:46.999 IEEE OUI Identifier: 00 00 00 00:30:46.999 Multi-path I/O 00:30:46.999 May have multiple subsystem ports: No 00:30:46.999 May have multiple controllers: No 00:30:46.999 Associated with SR-IOV VF: No 00:30:46.999 Max Data Transfer Size: 131072 00:30:46.999 Max Number of Namespaces: 0 00:30:46.999 Max Number of I/O Queues: 1024 00:30:46.999 NVMe Specification Version (VS): 1.3 00:30:46.999 NVMe Specification Version (Identify): 1.3 00:30:46.999 Maximum Queue Entries: 128 00:30:46.999 Contiguous Queues Required: Yes 00:30:46.999 Arbitration Mechanisms Supported 00:30:46.999 Weighted Round Robin: Not Supported 00:30:46.999 Vendor Specific: Not Supported 00:30:46.999 Reset Timeout: 15000 ms 00:30:46.999 Doorbell Stride: 4 bytes 00:30:46.999 NVM Subsystem Reset: Not Supported 00:30:46.999 Command Sets Supported 00:30:46.999 NVM Command Set: Supported 00:30:46.999 Boot Partition: Not Supported 00:30:46.999 Memory Page Size Minimum: 4096 bytes 00:30:46.999 Memory Page Size Maximum: 4096 bytes 00:30:46.999 Persistent Memory Region: Not Supported 00:30:46.999 Optional Asynchronous Events Supported 00:30:46.999 Namespace Attribute Notices: Not Supported 00:30:46.999 Firmware Activation Notices: Not Supported 00:30:46.999 ANA Change Notices: Not Supported 00:30:46.999 PLE Aggregate Log Change Notices: Not Supported 00:30:46.999 LBA Status Info Alert Notices: Not Supported 00:30:46.999 EGE Aggregate Log Change Notices: Not Supported 00:30:46.999 Normal NVM Subsystem Shutdown event: Not Supported 00:30:46.999 Zone Descriptor Change Notices: Not Supported 00:30:46.999 Discovery Log Change Notices: Supported 00:30:46.999 Controller Attributes 00:30:46.999 128-bit Host Identifier: Not Supported 00:30:46.999 Non-Operational Permissive Mode: Not Supported 00:30:46.999 NVM Sets: Not Supported 00:30:46.999 Read Recovery Levels: Not Supported 00:30:46.999 Endurance Groups: Not Supported 00:30:46.999 Predictable Latency Mode: Not Supported 00:30:46.999 Traffic Based Keep ALive: Not Supported 00:30:46.999 Namespace Granularity: Not Supported 00:30:47.000 SQ Associations: Not Supported 00:30:47.000 UUID List: Not Supported 00:30:47.000 Multi-Domain Subsystem: Not Supported 00:30:47.000 Fixed Capacity Management: Not Supported 00:30:47.000 Variable Capacity Management: Not Supported 00:30:47.000 Delete Endurance Group: Not Supported 00:30:47.000 Delete NVM Set: Not Supported 00:30:47.000 Extended LBA Formats Supported: Not Supported 00:30:47.000 Flexible Data Placement Supported: Not Supported 00:30:47.000 00:30:47.000 Controller Memory Buffer Support 00:30:47.000 ================================ 00:30:47.000 Supported: No 00:30:47.000 00:30:47.000 Persistent Memory Region Support 00:30:47.000 ================================ 00:30:47.000 Supported: No 00:30:47.000 00:30:47.000 Admin Command Set Attributes 00:30:47.000 ============================ 00:30:47.000 Security Send/Receive: Not Supported 00:30:47.000 Format NVM: Not Supported 00:30:47.000 Firmware Activate/Download: Not Supported 00:30:47.000 Namespace Management: Not Supported 00:30:47.000 Device Self-Test: Not Supported 00:30:47.000 Directives: Not Supported 00:30:47.000 NVMe-MI: Not Supported 00:30:47.000 Virtualization Management: Not Supported 00:30:47.000 Doorbell Buffer Config: Not Supported 00:30:47.000 Get LBA Status Capability: Not Supported 
00:30:47.000 Command & Feature Lockdown Capability: Not Supported 00:30:47.000 Abort Command Limit: 1 00:30:47.000 Async Event Request Limit: 4 00:30:47.000 Number of Firmware Slots: N/A 00:30:47.000 Firmware Slot 1 Read-Only: N/A 00:30:47.000 Firmware Activation Without Reset: N/A 00:30:47.000 Multiple Update Detection Support: N/A 00:30:47.000 Firmware Update Granularity: No Information Provided 00:30:47.000 Per-Namespace SMART Log: No 00:30:47.000 Asymmetric Namespace Access Log Page: Not Supported 00:30:47.000 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:47.000 Command Effects Log Page: Not Supported 00:30:47.000 Get Log Page Extended Data: Supported 00:30:47.000 Telemetry Log Pages: Not Supported 00:30:47.000 Persistent Event Log Pages: Not Supported 00:30:47.000 Supported Log Pages Log Page: May Support 00:30:47.000 Commands Supported & Effects Log Page: Not Supported 00:30:47.000 Feature Identifiers & Effects Log Page:May Support 00:30:47.000 NVMe-MI Commands & Effects Log Page: May Support 00:30:47.000 Data Area 4 for Telemetry Log: Not Supported 00:30:47.000 Error Log Page Entries Supported: 128 00:30:47.000 Keep Alive: Not Supported 00:30:47.000 00:30:47.000 NVM Command Set Attributes 00:30:47.000 ========================== 00:30:47.000 Submission Queue Entry Size 00:30:47.000 Max: 1 00:30:47.000 Min: 1 00:30:47.000 Completion Queue Entry Size 00:30:47.000 Max: 1 00:30:47.000 Min: 1 00:30:47.000 Number of Namespaces: 0 00:30:47.000 Compare Command: Not Supported 00:30:47.000 Write Uncorrectable Command: Not Supported 00:30:47.000 Dataset Management Command: Not Supported 00:30:47.000 Write Zeroes Command: Not Supported 00:30:47.000 Set Features Save Field: Not Supported 00:30:47.000 Reservations: Not Supported 00:30:47.000 Timestamp: Not Supported 00:30:47.000 Copy: Not Supported 00:30:47.000 Volatile Write Cache: Not Present 00:30:47.000 Atomic Write Unit (Normal): 1 00:30:47.000 Atomic Write Unit (PFail): 1 00:30:47.000 Atomic Compare & Write Unit: 1 00:30:47.000 Fused Compare & Write: Supported 00:30:47.000 Scatter-Gather List 00:30:47.000 SGL Command Set: Supported 00:30:47.000 SGL Keyed: Supported 00:30:47.000 SGL Bit Bucket Descriptor: Not Supported 00:30:47.000 SGL Metadata Pointer: Not Supported 00:30:47.000 Oversized SGL: Not Supported 00:30:47.000 SGL Metadata Address: Not Supported 00:30:47.000 SGL Offset: Supported 00:30:47.000 Transport SGL Data Block: Not Supported 00:30:47.000 Replay Protected Memory Block: Not Supported 00:30:47.000 00:30:47.000 Firmware Slot Information 00:30:47.000 ========================= 00:30:47.000 Active slot: 0 00:30:47.000 00:30:47.000 00:30:47.000 Error Log 00:30:47.000 ========= 00:30:47.000 00:30:47.000 Active Namespaces 00:30:47.000 ================= 00:30:47.000 Discovery Log Page 00:30:47.000 ================== 00:30:47.000 Generation Counter: 2 00:30:47.000 Number of Records: 2 00:30:47.000 Record Format: 0 00:30:47.000 00:30:47.000 Discovery Log Entry 0 00:30:47.000 ---------------------- 00:30:47.000 Transport Type: 3 (TCP) 00:30:47.000 Address Family: 1 (IPv4) 00:30:47.000 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:47.000 Entry Flags: 00:30:47.000 Duplicate Returned Information: 1 00:30:47.000 Explicit Persistent Connection Support for Discovery: 1 00:30:47.000 Transport Requirements: 00:30:47.000 Secure Channel: Not Required 00:30:47.000 Port ID: 0 (0x0000) 00:30:47.000 Controller ID: 65535 (0xffff) 00:30:47.000 Admin Max SQ Size: 128 00:30:47.000 Transport Service Identifier: 4420 00:30:47.000 NVM 
Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:47.000 Transport Address: 10.0.0.2 00:30:47.000 Discovery Log Entry 1 00:30:47.000 ---------------------- 00:30:47.000 Transport Type: 3 (TCP) 00:30:47.000 Address Family: 1 (IPv4) 00:30:47.000 Subsystem Type: 2 (NVM Subsystem) 00:30:47.000 Entry Flags: 00:30:47.000 Duplicate Returned Information: 0 00:30:47.000 Explicit Persistent Connection Support for Discovery: 0 00:30:47.000 Transport Requirements: 00:30:47.000 Secure Channel: Not Required 00:30:47.000 Port ID: 0 (0x0000) 00:30:47.000 Controller ID: 65535 (0xffff) 00:30:47.000 Admin Max SQ Size: 128 00:30:47.000 Transport Service Identifier: 4420 00:30:47.000 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:47.000 Transport Address: 10.0.0.2 [2024-11-10 00:04:13.111875] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:47.000 [2024-11-10 00:04:13.111909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.000 [2024-11-10 00:04:13.111947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.000 [2024-11-10 00:04:13.111964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:47.000 [2024-11-10 00:04:13.111979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.000 [2024-11-10 00:04:13.111991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:47.000 [2024-11-10 00:04:13.112004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.000 [2024-11-10 00:04:13.112016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.000 [2024-11-10 00:04:13.112036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.000 [2024-11-10 00:04:13.112061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.000 [2024-11-10 00:04:13.112076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.000 [2024-11-10 00:04:13.112088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.000 [2024-11-10 00:04:13.112116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.000 [2024-11-10 00:04:13.112154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.000 [2024-11-10 00:04:13.112387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.000 [2024-11-10 00:04:13.112409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.000 [2024-11-10 00:04:13.112421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.000 [2024-11-10 00:04:13.112433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.000 [2024-11-10 00:04:13.112458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.000 [2024-11-10 00:04:13.112488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:30:47.000 [2024-11-10 00:04:13.112500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.000 [2024-11-10 00:04:13.112526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.000 [2024-11-10 00:04:13.112582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.001 [2024-11-10 00:04:13.112753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.001 [2024-11-10 00:04:13.112774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.001 [2024-11-10 00:04:13.112785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.112796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.001 [2024-11-10 00:04:13.112817] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:47.001 [2024-11-10 00:04:13.112833] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:47.001 [2024-11-10 00:04:13.112861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.112877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.112889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.001 [2024-11-10 00:04:13.112923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.001 [2024-11-10 00:04:13.112956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.001 [2024-11-10 00:04:13.113123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.001 [2024-11-10 00:04:13.113144] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.001 [2024-11-10 00:04:13.113156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.113167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.001 [2024-11-10 00:04:13.113195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.113211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.113221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.001 [2024-11-10 00:04:13.113240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.001 [2024-11-10 00:04:13.113270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.001 [2024-11-10 00:04:13.113408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.001 [2024-11-10 00:04:13.113429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.001 [2024-11-10 00:04:13.113440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.113451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.001 [2024-11-10 
00:04:13.113478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.113493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.113504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.001 [2024-11-10 00:04:13.113522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.001 [2024-11-10 00:04:13.113552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.001 [2024-11-10 00:04:13.113665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.001 [2024-11-10 00:04:13.113687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.001 [2024-11-10 00:04:13.113699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.113710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.001 [2024-11-10 00:04:13.113737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.113752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.113763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.001 [2024-11-10 00:04:13.113781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.001 [2024-11-10 00:04:13.113812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.001 [2024-11-10 00:04:13.113927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.001 [2024-11-10 00:04:13.113949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.001 [2024-11-10 00:04:13.113960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.113971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.001 [2024-11-10 00:04:13.113998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.114013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.114024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.001 [2024-11-10 00:04:13.114042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.001 [2024-11-10 00:04:13.114073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.001 [2024-11-10 00:04:13.114228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.001 [2024-11-10 00:04:13.114249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.001 [2024-11-10 00:04:13.114260] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.114271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.001 [2024-11-10 00:04:13.114298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.114314] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.114324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.001 [2024-11-10 00:04:13.114348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.001 [2024-11-10 00:04:13.114380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.001 [2024-11-10 00:04:13.114516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.001 [2024-11-10 00:04:13.114536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.001 [2024-11-10 00:04:13.114548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.114559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.001 [2024-11-10 00:04:13.114592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.114610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.114620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.001 [2024-11-10 00:04:13.114639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.001 [2024-11-10 00:04:13.114669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.001 [2024-11-10 00:04:13.114777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.001 [2024-11-10 00:04:13.114797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.001 [2024-11-10 00:04:13.114809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.114820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.001 [2024-11-10 00:04:13.114846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.114862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.114872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.001 [2024-11-10 00:04:13.114890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.001 [2024-11-10 00:04:13.114920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.001 [2024-11-10 00:04:13.115026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.001 [2024-11-10 00:04:13.115047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.001 [2024-11-10 00:04:13.115059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.115070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.001 [2024-11-10 00:04:13.115096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.115111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.115122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.001 [2024-11-10 00:04:13.115140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.001 [2024-11-10 00:04:13.115170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.001 [2024-11-10 00:04:13.115279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.001 [2024-11-10 00:04:13.115301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.001 [2024-11-10 00:04:13.115312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.115337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.001 [2024-11-10 00:04:13.115366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.115382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.001 [2024-11-10 00:04:13.115393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.001 [2024-11-10 00:04:13.115411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.001 [2024-11-10 00:04:13.115441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.001 [2024-11-10 00:04:13.115577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.002 [2024-11-10 00:04:13.119632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.002 [2024-11-10 00:04:13.119648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.002 [2024-11-10 00:04:13.119659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.002 [2024-11-10 00:04:13.119702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.002 [2024-11-10 00:04:13.119719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.002 [2024-11-10 00:04:13.119729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.002 [2024-11-10 00:04:13.119748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.002 [2024-11-10 00:04:13.119781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.002 [2024-11-10 00:04:13.119923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.002 [2024-11-10 00:04:13.119944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.002 [2024-11-10 00:04:13.119956] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.002 [2024-11-10 00:04:13.119967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.002 [2024-11-10 00:04:13.119989] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:30:47.002 00:30:47.002 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L 
all 00:30:47.263 [2024-11-10 00:04:13.225925] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:30:47.263 [2024-11-10 00:04:13.226016] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3573290 ] 00:30:47.263 [2024-11-10 00:04:13.304209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:47.263 [2024-11-10 00:04:13.304343] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:47.263 [2024-11-10 00:04:13.304364] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:47.263 [2024-11-10 00:04:13.304402] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:47.263 [2024-11-10 00:04:13.304428] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:47.263 [2024-11-10 00:04:13.305252] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:47.263 [2024-11-10 00:04:13.305336] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:47.263 [2024-11-10 00:04:13.315946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:47.263 [2024-11-10 00:04:13.316001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:47.263 [2024-11-10 00:04:13.316035] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:47.263 [2024-11-10 00:04:13.316046] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:47.263 [2024-11-10 00:04:13.316124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.263 [2024-11-10 00:04:13.316147] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.263 [2024-11-10 00:04:13.316168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.263 [2024-11-10 00:04:13.316204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:47.263 [2024-11-10 00:04:13.316252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.263 [2024-11-10 00:04:13.323606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.263 [2024-11-10 00:04:13.323651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.263 [2024-11-10 00:04:13.323671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.263 [2024-11-10 00:04:13.323686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.263 [2024-11-10 00:04:13.323732] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:47.263 [2024-11-10 00:04:13.323756] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:47.264 [2024-11-10 00:04:13.323774] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:47.264 [2024-11-10 00:04:13.323809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.264 
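The spdk_nvme_identify invocation above passes -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', and the records that follow show the admin queue pair connecting its socket and starting the fabric CONNECT / PROPERTY GET exchange. A minimal host-side attach targeting the same subsystem through the public SPDK API looks roughly like the sketch below; this is not the identify tool's own source, and error handling is trimmed.

/* Hedged sketch: attach to the NVMe-oF/TCP subsystem the identify run
 * above targets, using the public SPDK host API. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "tcp_attach_sketch";
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same target string the log's "-r" option carries. */
    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        fprintf(stderr, "failed to parse transport ID\n");
        return 1;
    }

    /* Drives the sequence traced above: socket connect, FABRIC CONNECT,
     * PROPERTY GET/SET, IDENTIFY, and so on. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect to %s failed\n", trid.traddr);
        return 1;
    }

    printf("attached to %s\n", trid.subnqn);
    spdk_nvme_detach(ctrlr);
    return 0;
}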
[2024-11-10 00:04:13.323824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.323836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.264 [2024-11-10 00:04:13.323857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.264 [2024-11-10 00:04:13.323907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.264 [2024-11-10 00:04:13.324068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.264 [2024-11-10 00:04:13.324092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.264 [2024-11-10 00:04:13.324105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.324117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.264 [2024-11-10 00:04:13.324146] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:47.264 [2024-11-10 00:04:13.324171] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:47.264 [2024-11-10 00:04:13.324197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.324213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.324225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.264 [2024-11-10 00:04:13.324263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.264 [2024-11-10 00:04:13.324298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.264 [2024-11-10 00:04:13.324453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.264 [2024-11-10 00:04:13.324475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.264 [2024-11-10 00:04:13.324487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.324498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.264 [2024-11-10 00:04:13.324514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:47.264 [2024-11-10 00:04:13.324538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:47.264 [2024-11-10 00:04:13.324559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.324579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.324600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.264 [2024-11-10 00:04:13.324620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.264 [2024-11-10 00:04:13.324666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.264 [2024-11-10 00:04:13.324810] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.264 [2024-11-10 00:04:13.324836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.264 [2024-11-10 00:04:13.324850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.324861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.264 [2024-11-10 00:04:13.324877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:47.264 [2024-11-10 00:04:13.324906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.324922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.324950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.264 [2024-11-10 00:04:13.324968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.264 [2024-11-10 00:04:13.325000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.264 [2024-11-10 00:04:13.325157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.264 [2024-11-10 00:04:13.325180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.264 [2024-11-10 00:04:13.325191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.325202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.264 [2024-11-10 00:04:13.325218] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:47.264 [2024-11-10 00:04:13.325240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:47.264 [2024-11-10 00:04:13.325263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:47.264 [2024-11-10 00:04:13.325386] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:47.264 [2024-11-10 00:04:13.325402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:47.264 [2024-11-10 00:04:13.325428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.325441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.325452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.264 [2024-11-10 00:04:13.325471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.264 [2024-11-10 00:04:13.325502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.264 [2024-11-10 00:04:13.325646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.264 [2024-11-10 00:04:13.325669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.264 
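The records around this point walk the controller-enable handshake: the driver reads CC and CSTS, sees CC.EN = 0 and CSTS.RDY = 0, writes CC.EN = 1, then waits for CSTS.RDY = 1 before declaring the controller ready, with each register access travelling as a FABRIC PROPERTY GET/SET capsule on this transport. A schematic version of that handshake is sketched below using the register layouts from spdk/nvme_spec.h; the reg_* helpers are hypothetical stand-ins that simulate the property transfers rather than SPDK's internal code.

/* Schematic controller-enable handshake matching the trace. The reg_*
 * helpers are hypothetical stand-ins for the FABRIC PROPERTY GET/SET
 * capsules; here they simulate a controller that follows CC.EN. */
#include <stdio.h>
#include "spdk/nvme_spec.h"

static union spdk_nvme_cc_register   g_cc;    /* simulated CC property   */
static union spdk_nvme_csts_register g_csts;  /* simulated CSTS property */

static union spdk_nvme_cc_register   reg_read_cc(void)   { return g_cc; }
static union spdk_nvme_csts_register reg_read_csts(void) { return g_csts; }

static void
reg_write_cc(union spdk_nvme_cc_register cc)
{
    g_cc = cc;
    g_csts.bits.rdy = cc.bits.en;   /* pretend the controller follows EN */
}

int
main(void)
{
    union spdk_nvme_cc_register cc = reg_read_cc();
    union spdk_nvme_csts_register csts = reg_read_csts();

    /* "CC.EN = 0 && CSTS.RDY = 0" in the trace: controller is disabled. */
    if (cc.bits.en == 0 && csts.bits.rdy == 0) {
        cc.bits.en = 1;             /* "Setting CC.EN = 1" */
        reg_write_cc(cc);
    }

    /* "wait for CSTS.RDY = 1 (timeout 15000 ms)": poll until ready;
     * the real driver bounds this wait using CAP.TO. */
    do {
        csts = reg_read_csts();
    } while (csts.bits.rdy == 0);

    printf("CC.EN = 1 && CSTS.RDY = 1 - controller is ready\n");
    return 0;
}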
[2024-11-10 00:04:13.325680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.325691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.264 [2024-11-10 00:04:13.325706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:47.264 [2024-11-10 00:04:13.325740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.325757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.325768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.264 [2024-11-10 00:04:13.325792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.264 [2024-11-10 00:04:13.325825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.264 [2024-11-10 00:04:13.325979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.264 [2024-11-10 00:04:13.326001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.264 [2024-11-10 00:04:13.326013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.326024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.264 [2024-11-10 00:04:13.326043] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:47.264 [2024-11-10 00:04:13.326059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:47.264 [2024-11-10 00:04:13.326081] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:47.264 [2024-11-10 00:04:13.326124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:47.264 [2024-11-10 00:04:13.326156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.326176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.264 [2024-11-10 00:04:13.326196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.264 [2024-11-10 00:04:13.326227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.264 [2024-11-10 00:04:13.326440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.264 [2024-11-10 00:04:13.326464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.264 [2024-11-10 00:04:13.326476] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.326496] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:47.264 [2024-11-10 00:04:13.326512] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, 
payload_size=4096 00:30:47.264 [2024-11-10 00:04:13.326525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.326548] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.326562] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.326608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.264 [2024-11-10 00:04:13.326629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.264 [2024-11-10 00:04:13.326641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.264 [2024-11-10 00:04:13.326651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.264 [2024-11-10 00:04:13.326676] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:47.265 [2024-11-10 00:04:13.326693] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:47.265 [2024-11-10 00:04:13.326715] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:47.265 [2024-11-10 00:04:13.326733] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:47.265 [2024-11-10 00:04:13.326747] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:47.265 [2024-11-10 00:04:13.326760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:47.265 [2024-11-10 00:04:13.326793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:47.265 [2024-11-10 00:04:13.326815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.326829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.326841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.265 [2024-11-10 00:04:13.326861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:47.265 [2024-11-10 00:04:13.326908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.265 [2024-11-10 00:04:13.327048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.265 [2024-11-10 00:04:13.327069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.265 [2024-11-10 00:04:13.327081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.327092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.265 [2024-11-10 00:04:13.327114] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.327135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.327146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:47.265 [2024-11-10 00:04:13.327174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.265 [2024-11-10 00:04:13.327193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.327204] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.327215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:47.265 [2024-11-10 00:04:13.327230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.265 [2024-11-10 00:04:13.327246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.327273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.327283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:47.265 [2024-11-10 00:04:13.327299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.265 [2024-11-10 00:04:13.327315] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.327326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.327340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.265 [2024-11-10 00:04:13.327373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.265 [2024-11-10 00:04:13.327387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:47.265 [2024-11-10 00:04:13.327428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:47.265 [2024-11-10 00:04:13.327450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.327462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:47.265 [2024-11-10 00:04:13.327481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.265 [2024-11-10 00:04:13.327514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:47.265 [2024-11-10 00:04:13.327552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:47.265 [2024-11-10 00:04:13.327566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:47.265 [2024-11-10 00:04:13.327578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.265 [2024-11-10 00:04:13.331606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:47.265 [2024-11-10 00:04:13.331632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.265 [2024-11-10 00:04:13.331655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.265 [2024-11-10 00:04:13.331667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.331678] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:47.265 [2024-11-10 00:04:13.331695] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:47.265 [2024-11-10 00:04:13.331716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:47.265 [2024-11-10 00:04:13.331756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:47.265 [2024-11-10 00:04:13.331776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:47.265 [2024-11-10 00:04:13.331794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.331808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.331820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:47.265 [2024-11-10 00:04:13.331839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:47.265 [2024-11-10 00:04:13.331873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:47.265 [2024-11-10 00:04:13.332002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.265 [2024-11-10 00:04:13.332025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.265 [2024-11-10 00:04:13.332037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.332048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:47.265 [2024-11-10 00:04:13.332152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:47.265 [2024-11-10 00:04:13.332213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:47.265 [2024-11-10 00:04:13.332242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.332262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:47.265 [2024-11-10 00:04:13.332281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.265 [2024-11-10 00:04:13.332313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:47.265 [2024-11-10 00:04:13.332480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.265 [2024-11-10 00:04:13.332502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.265 [2024-11-10 00:04:13.332514] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.332524] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:47.265 [2024-11-10 00:04:13.332536] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): 
expected_datao=0, payload_size=4096 00:30:47.265 [2024-11-10 00:04:13.332552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.332576] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.332598] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.332635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.265 [2024-11-10 00:04:13.332655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.265 [2024-11-10 00:04:13.332666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.332677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:47.265 [2024-11-10 00:04:13.332727] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:47.265 [2024-11-10 00:04:13.332765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:47.265 [2024-11-10 00:04:13.332807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:47.265 [2024-11-10 00:04:13.332835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.265 [2024-11-10 00:04:13.332850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:47.265 [2024-11-10 00:04:13.332874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.265 [2024-11-10 00:04:13.332923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:47.265 [2024-11-10 00:04:13.333129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.266 [2024-11-10 00:04:13.333150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.266 [2024-11-10 00:04:13.333162] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.333172] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:47.266 [2024-11-10 00:04:13.333184] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:47.266 [2024-11-10 00:04:13.333195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.333212] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.333225] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.333243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.266 [2024-11-10 00:04:13.333268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.266 [2024-11-10 00:04:13.333281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.333291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:47.266 [2024-11-10 00:04:13.333334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 
ms) 00:30:47.266 [2024-11-10 00:04:13.333369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:47.266 [2024-11-10 00:04:13.333396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.333411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:47.266 [2024-11-10 00:04:13.333446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.266 [2024-11-10 00:04:13.333478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:47.266 [2024-11-10 00:04:13.333646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.266 [2024-11-10 00:04:13.333669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.266 [2024-11-10 00:04:13.333698] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.333709] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:47.266 [2024-11-10 00:04:13.333721] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:47.266 [2024-11-10 00:04:13.333733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.333755] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.333769] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.333799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.266 [2024-11-10 00:04:13.333820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.266 [2024-11-10 00:04:13.333831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.333842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:47.266 [2024-11-10 00:04:13.333872] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:47.266 [2024-11-10 00:04:13.333898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:47.266 [2024-11-10 00:04:13.333937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:47.266 [2024-11-10 00:04:13.333957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:47.266 [2024-11-10 00:04:13.333971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:47.266 [2024-11-10 00:04:13.333985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:47.266 [2024-11-10 00:04:13.333999] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:47.266 
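Once this initialization sequence finishes, the namespaces reported during the IDENTIFY phase ("Namespace 1 was added" above) are reachable through the public accessors. A short sketch of walking the active-namespace list follows; it assumes a `ctrlr` obtained from spdk_nvme_connect() as in the earlier sketch, and the printed fields are only examples.

/* Hedged sketch: list the active namespaces the IDENTIFY phase discovered. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void
list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    uint32_t nsid;

    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
            continue;
        }
        printf("nsid %" PRIu32 ": %" PRIu64 " bytes, block size %" PRIu32 "\n",
               nsid, spdk_nvme_ns_get_size(ns),
               spdk_nvme_ns_get_sector_size(ns));
    }
}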
[2024-11-10 00:04:13.334020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:47.266 [2024-11-10 00:04:13.334036] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:47.266 [2024-11-10 00:04:13.334085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.334102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:47.266 [2024-11-10 00:04:13.334121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.266 [2024-11-10 00:04:13.334144] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.334158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.334185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:47.266 [2024-11-10 00:04:13.334202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:47.266 [2024-11-10 00:04:13.334242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:47.266 [2024-11-10 00:04:13.334276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:47.266 [2024-11-10 00:04:13.334516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.266 [2024-11-10 00:04:13.334542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.266 [2024-11-10 00:04:13.334554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.334566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:47.266 [2024-11-10 00:04:13.334597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.266 [2024-11-10 00:04:13.334616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.266 [2024-11-10 00:04:13.334627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.334638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:47.266 [2024-11-10 00:04:13.334663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.334684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:47.266 [2024-11-10 00:04:13.334704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.266 [2024-11-10 00:04:13.334740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:47.266 [2024-11-10 00:04:13.334878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.266 [2024-11-10 00:04:13.334899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.266 [2024-11-10 00:04:13.334911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.334922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 
00:30:47.266 [2024-11-10 00:04:13.334947] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.334973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:47.266 [2024-11-10 00:04:13.334996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.266 [2024-11-10 00:04:13.335027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:47.266 [2024-11-10 00:04:13.335164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.266 [2024-11-10 00:04:13.335185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.266 [2024-11-10 00:04:13.335196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.335207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:47.266 [2024-11-10 00:04:13.335232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.335248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:47.266 [2024-11-10 00:04:13.335274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.266 [2024-11-10 00:04:13.335306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:47.266 [2024-11-10 00:04:13.335417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.266 [2024-11-10 00:04:13.335440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.266 [2024-11-10 00:04:13.335451] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.335462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:47.266 [2024-11-10 00:04:13.335505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.335523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:47.266 [2024-11-10 00:04:13.335543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.266 [2024-11-10 00:04:13.335566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.266 [2024-11-10 00:04:13.335581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:47.266 [2024-11-10 00:04:13.339635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.266 [2024-11-10 00:04:13.339671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.339688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:47.267 [2024-11-10 00:04:13.339712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.267 [2024-11-10 00:04:13.339757] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.339773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:47.267 [2024-11-10 00:04:13.339791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.267 [2024-11-10 00:04:13.339824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:47.267 [2024-11-10 00:04:13.339859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:47.267 [2024-11-10 00:04:13.339872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:47.267 [2024-11-10 00:04:13.339883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:47.267 [2024-11-10 00:04:13.340187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.267 [2024-11-10 00:04:13.340228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.267 [2024-11-10 00:04:13.340240] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340252] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:47.267 [2024-11-10 00:04:13.340264] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:47.267 [2024-11-10 00:04:13.340276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340306] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340321] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.267 [2024-11-10 00:04:13.340351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.267 [2024-11-10 00:04:13.340362] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340372] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:47.267 [2024-11-10 00:04:13.340384] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:47.267 [2024-11-10 00:04:13.340395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340418] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340432] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.267 [2024-11-10 00:04:13.340465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.267 [2024-11-10 00:04:13.340477] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340487] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:47.267 [2024-11-10 00:04:13.340499] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on 
tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:47.267 [2024-11-10 00:04:13.340510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340526] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340538] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:47.267 [2024-11-10 00:04:13.340570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:47.267 [2024-11-10 00:04:13.340582] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340601] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:47.267 [2024-11-10 00:04:13.340618] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:47.267 [2024-11-10 00:04:13.340634] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340657] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340670] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.267 [2024-11-10 00:04:13.340704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.267 [2024-11-10 00:04:13.340715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:47.267 [2024-11-10 00:04:13.340764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.267 [2024-11-10 00:04:13.340789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.267 [2024-11-10 00:04:13.340802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:47.267 [2024-11-10 00:04:13.340842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.267 [2024-11-10 00:04:13.340859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.267 [2024-11-10 00:04:13.340886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:47.267 [2024-11-10 00:04:13.340916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.267 [2024-11-10 00:04:13.340932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.267 [2024-11-10 00:04:13.340958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.267 [2024-11-10 00:04:13.340968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:47.267 ===================================================== 00:30:47.267 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:47.267 ===================================================== 00:30:47.267 Controller Capabilities/Features 00:30:47.267 ================================ 
00:30:47.267 Vendor ID: 8086 00:30:47.267 Subsystem Vendor ID: 8086 00:30:47.267 Serial Number: SPDK00000000000001 00:30:47.267 Model Number: SPDK bdev Controller 00:30:47.267 Firmware Version: 25.01 00:30:47.267 Recommended Arb Burst: 6 00:30:47.267 IEEE OUI Identifier: e4 d2 5c 00:30:47.267 Multi-path I/O 00:30:47.267 May have multiple subsystem ports: Yes 00:30:47.267 May have multiple controllers: Yes 00:30:47.267 Associated with SR-IOV VF: No 00:30:47.267 Max Data Transfer Size: 131072 00:30:47.267 Max Number of Namespaces: 32 00:30:47.267 Max Number of I/O Queues: 127 00:30:47.267 NVMe Specification Version (VS): 1.3 00:30:47.267 NVMe Specification Version (Identify): 1.3 00:30:47.267 Maximum Queue Entries: 128 00:30:47.267 Contiguous Queues Required: Yes 00:30:47.267 Arbitration Mechanisms Supported 00:30:47.267 Weighted Round Robin: Not Supported 00:30:47.267 Vendor Specific: Not Supported 00:30:47.267 Reset Timeout: 15000 ms 00:30:47.267 Doorbell Stride: 4 bytes 00:30:47.267 NVM Subsystem Reset: Not Supported 00:30:47.267 Command Sets Supported 00:30:47.267 NVM Command Set: Supported 00:30:47.267 Boot Partition: Not Supported 00:30:47.267 Memory Page Size Minimum: 4096 bytes 00:30:47.267 Memory Page Size Maximum: 4096 bytes 00:30:47.267 Persistent Memory Region: Not Supported 00:30:47.267 Optional Asynchronous Events Supported 00:30:47.267 Namespace Attribute Notices: Supported 00:30:47.267 Firmware Activation Notices: Not Supported 00:30:47.267 ANA Change Notices: Not Supported 00:30:47.267 PLE Aggregate Log Change Notices: Not Supported 00:30:47.267 LBA Status Info Alert Notices: Not Supported 00:30:47.267 EGE Aggregate Log Change Notices: Not Supported 00:30:47.267 Normal NVM Subsystem Shutdown event: Not Supported 00:30:47.267 Zone Descriptor Change Notices: Not Supported 00:30:47.267 Discovery Log Change Notices: Not Supported 00:30:47.267 Controller Attributes 00:30:47.267 128-bit Host Identifier: Supported 00:30:47.267 Non-Operational Permissive Mode: Not Supported 00:30:47.267 NVM Sets: Not Supported 00:30:47.267 Read Recovery Levels: Not Supported 00:30:47.267 Endurance Groups: Not Supported 00:30:47.267 Predictable Latency Mode: Not Supported 00:30:47.267 Traffic Based Keep ALive: Not Supported 00:30:47.267 Namespace Granularity: Not Supported 00:30:47.267 SQ Associations: Not Supported 00:30:47.267 UUID List: Not Supported 00:30:47.267 Multi-Domain Subsystem: Not Supported 00:30:47.267 Fixed Capacity Management: Not Supported 00:30:47.267 Variable Capacity Management: Not Supported 00:30:47.267 Delete Endurance Group: Not Supported 00:30:47.267 Delete NVM Set: Not Supported 00:30:47.267 Extended LBA Formats Supported: Not Supported 00:30:47.267 Flexible Data Placement Supported: Not Supported 00:30:47.267 00:30:47.268 Controller Memory Buffer Support 00:30:47.268 ================================ 00:30:47.268 Supported: No 00:30:47.268 00:30:47.268 Persistent Memory Region Support 00:30:47.268 ================================ 00:30:47.268 Supported: No 00:30:47.268 00:30:47.268 Admin Command Set Attributes 00:30:47.268 ============================ 00:30:47.268 Security Send/Receive: Not Supported 00:30:47.268 Format NVM: Not Supported 00:30:47.268 Firmware Activate/Download: Not Supported 00:30:47.268 Namespace Management: Not Supported 00:30:47.268 Device Self-Test: Not Supported 00:30:47.268 Directives: Not Supported 00:30:47.268 NVMe-MI: Not Supported 00:30:47.268 Virtualization Management: Not Supported 00:30:47.268 Doorbell Buffer Config: Not Supported 00:30:47.268 
Get LBA Status Capability: Not Supported 00:30:47.268 Command & Feature Lockdown Capability: Not Supported 00:30:47.268 Abort Command Limit: 4 00:30:47.268 Async Event Request Limit: 4 00:30:47.268 Number of Firmware Slots: N/A 00:30:47.268 Firmware Slot 1 Read-Only: N/A 00:30:47.268 Firmware Activation Without Reset: N/A 00:30:47.268 Multiple Update Detection Support: N/A 00:30:47.268 Firmware Update Granularity: No Information Provided 00:30:47.268 Per-Namespace SMART Log: No 00:30:47.268 Asymmetric Namespace Access Log Page: Not Supported 00:30:47.268 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:47.268 Command Effects Log Page: Supported 00:30:47.268 Get Log Page Extended Data: Supported 00:30:47.268 Telemetry Log Pages: Not Supported 00:30:47.268 Persistent Event Log Pages: Not Supported 00:30:47.268 Supported Log Pages Log Page: May Support 00:30:47.268 Commands Supported & Effects Log Page: Not Supported 00:30:47.268 Feature Identifiers & Effects Log Page:May Support 00:30:47.268 NVMe-MI Commands & Effects Log Page: May Support 00:30:47.268 Data Area 4 for Telemetry Log: Not Supported 00:30:47.268 Error Log Page Entries Supported: 128 00:30:47.268 Keep Alive: Supported 00:30:47.268 Keep Alive Granularity: 10000 ms 00:30:47.268 00:30:47.268 NVM Command Set Attributes 00:30:47.268 ========================== 00:30:47.268 Submission Queue Entry Size 00:30:47.268 Max: 64 00:30:47.268 Min: 64 00:30:47.268 Completion Queue Entry Size 00:30:47.268 Max: 16 00:30:47.268 Min: 16 00:30:47.268 Number of Namespaces: 32 00:30:47.268 Compare Command: Supported 00:30:47.268 Write Uncorrectable Command: Not Supported 00:30:47.268 Dataset Management Command: Supported 00:30:47.268 Write Zeroes Command: Supported 00:30:47.268 Set Features Save Field: Not Supported 00:30:47.268 Reservations: Supported 00:30:47.268 Timestamp: Not Supported 00:30:47.268 Copy: Supported 00:30:47.268 Volatile Write Cache: Present 00:30:47.268 Atomic Write Unit (Normal): 1 00:30:47.268 Atomic Write Unit (PFail): 1 00:30:47.268 Atomic Compare & Write Unit: 1 00:30:47.268 Fused Compare & Write: Supported 00:30:47.268 Scatter-Gather List 00:30:47.268 SGL Command Set: Supported 00:30:47.268 SGL Keyed: Supported 00:30:47.268 SGL Bit Bucket Descriptor: Not Supported 00:30:47.268 SGL Metadata Pointer: Not Supported 00:30:47.268 Oversized SGL: Not Supported 00:30:47.268 SGL Metadata Address: Not Supported 00:30:47.268 SGL Offset: Supported 00:30:47.268 Transport SGL Data Block: Not Supported 00:30:47.268 Replay Protected Memory Block: Not Supported 00:30:47.268 00:30:47.268 Firmware Slot Information 00:30:47.268 ========================= 00:30:47.268 Active slot: 1 00:30:47.268 Slot 1 Firmware Revision: 25.01 00:30:47.268 00:30:47.268 00:30:47.268 Commands Supported and Effects 00:30:47.268 ============================== 00:30:47.268 Admin Commands 00:30:47.268 -------------- 00:30:47.268 Get Log Page (02h): Supported 00:30:47.268 Identify (06h): Supported 00:30:47.268 Abort (08h): Supported 00:30:47.268 Set Features (09h): Supported 00:30:47.268 Get Features (0Ah): Supported 00:30:47.268 Asynchronous Event Request (0Ch): Supported 00:30:47.268 Keep Alive (18h): Supported 00:30:47.268 I/O Commands 00:30:47.268 ------------ 00:30:47.268 Flush (00h): Supported LBA-Change 00:30:47.268 Write (01h): Supported LBA-Change 00:30:47.268 Read (02h): Supported 00:30:47.268 Compare (05h): Supported 00:30:47.268 Write Zeroes (08h): Supported LBA-Change 00:30:47.268 Dataset Management (09h): Supported LBA-Change 00:30:47.268 Copy (19h): 
Supported LBA-Change 00:30:47.268 00:30:47.268 Error Log 00:30:47.268 ========= 00:30:47.268 00:30:47.268 Arbitration 00:30:47.268 =========== 00:30:47.268 Arbitration Burst: 1 00:30:47.268 00:30:47.268 Power Management 00:30:47.268 ================ 00:30:47.268 Number of Power States: 1 00:30:47.268 Current Power State: Power State #0 00:30:47.268 Power State #0: 00:30:47.268 Max Power: 0.00 W 00:30:47.268 Non-Operational State: Operational 00:30:47.268 Entry Latency: Not Reported 00:30:47.268 Exit Latency: Not Reported 00:30:47.268 Relative Read Throughput: 0 00:30:47.268 Relative Read Latency: 0 00:30:47.268 Relative Write Throughput: 0 00:30:47.268 Relative Write Latency: 0 00:30:47.268 Idle Power: Not Reported 00:30:47.268 Active Power: Not Reported 00:30:47.268 Non-Operational Permissive Mode: Not Supported 00:30:47.268 00:30:47.268 Health Information 00:30:47.268 ================== 00:30:47.268 Critical Warnings: 00:30:47.268 Available Spare Space: OK 00:30:47.268 Temperature: OK 00:30:47.268 Device Reliability: OK 00:30:47.268 Read Only: No 00:30:47.268 Volatile Memory Backup: OK 00:30:47.268 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:47.268 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:47.268 Available Spare: 0% 00:30:47.268 Available Spare Threshold: 0% 00:30:47.268 Life Percentage Used:[2024-11-10 00:04:13.341182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.268 [2024-11-10 00:04:13.341201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:47.268 [2024-11-10 00:04:13.341221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.268 [2024-11-10 00:04:13.341255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:47.268 [2024-11-10 00:04:13.341399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.268 [2024-11-10 00:04:13.341423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.268 [2024-11-10 00:04:13.341436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.268 [2024-11-10 00:04:13.341454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:47.268 [2024-11-10 00:04:13.341533] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:47.268 [2024-11-10 00:04:13.341565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:47.268 [2024-11-10 00:04:13.341597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.268 [2024-11-10 00:04:13.341616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:47.269 [2024-11-10 00:04:13.341631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.269 [2024-11-10 00:04:13.341649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:47.269 [2024-11-10 00:04:13.341663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.269 [2024-11-10 00:04:13.341680] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.269 [2024-11-10 00:04:13.341696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.269 [2024-11-10 00:04:13.341718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.341732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.341744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.269 [2024-11-10 00:04:13.341763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.269 [2024-11-10 00:04:13.341804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.269 [2024-11-10 00:04:13.341946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.269 [2024-11-10 00:04:13.341973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.269 [2024-11-10 00:04:13.341986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.341998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.269 [2024-11-10 00:04:13.342019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.342034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.342045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.269 [2024-11-10 00:04:13.342084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.269 [2024-11-10 00:04:13.342126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.269 [2024-11-10 00:04:13.342286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.269 [2024-11-10 00:04:13.342308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.269 [2024-11-10 00:04:13.342319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.342330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.269 [2024-11-10 00:04:13.342346] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:47.269 [2024-11-10 00:04:13.342360] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:47.269 [2024-11-10 00:04:13.342392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.342409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.342420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.269 [2024-11-10 00:04:13.342455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.269 [2024-11-10 00:04:13.342487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.269 [2024-11-10 
00:04:13.342641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.269 [2024-11-10 00:04:13.342668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.269 [2024-11-10 00:04:13.342680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.342692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.269 [2024-11-10 00:04:13.342724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.342744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.342756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.269 [2024-11-10 00:04:13.342774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.269 [2024-11-10 00:04:13.342805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.269 [2024-11-10 00:04:13.342919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.269 [2024-11-10 00:04:13.342941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.269 [2024-11-10 00:04:13.342953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.342964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.269 [2024-11-10 00:04:13.342991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.343007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.343017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.269 [2024-11-10 00:04:13.343035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.269 [2024-11-10 00:04:13.343066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.269 [2024-11-10 00:04:13.343210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.269 [2024-11-10 00:04:13.343232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.269 [2024-11-10 00:04:13.343243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.343254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.269 [2024-11-10 00:04:13.343291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.343312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.343324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.269 [2024-11-10 00:04:13.343358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.269 [2024-11-10 00:04:13.343388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.269 [2024-11-10 00:04:13.343554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.269 [2024-11-10 00:04:13.343577] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.269 [2024-11-10 00:04:13.344647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.344665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.269 [2024-11-10 00:04:13.344694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.344710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.344721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:47.269 [2024-11-10 00:04:13.344745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.269 [2024-11-10 00:04:13.344778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:47.269 [2024-11-10 00:04:13.344919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:47.269 [2024-11-10 00:04:13.344940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:47.269 [2024-11-10 00:04:13.344952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:47.269 [2024-11-10 00:04:13.344963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:47.269 [2024-11-10 00:04:13.344990] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 2 milliseconds 00:30:47.269 0% 00:30:47.269 Data Units Read: 0 00:30:47.269 Data Units Written: 0 00:30:47.269 Host Read Commands: 0 00:30:47.269 Host Write Commands: 0 00:30:47.269 Controller Busy Time: 0 minutes 00:30:47.269 Power Cycles: 0 00:30:47.269 Power On Hours: 0 hours 00:30:47.269 Unsafe Shutdowns: 0 00:30:47.269 Unrecoverable Media Errors: 0 00:30:47.269 Lifetime Error Log Entries: 0 00:30:47.269 Warning Temperature Time: 0 minutes 00:30:47.269 Critical Temperature Time: 0 minutes 00:30:47.269 00:30:47.269 Number of Queues 00:30:47.269 ================ 00:30:47.269 Number of I/O Submission Queues: 127 00:30:47.269 Number of I/O Completion Queues: 127 00:30:47.269 00:30:47.269 Active Namespaces 00:30:47.269 ================= 00:30:47.269 Namespace ID:1 00:30:47.269 Error Recovery Timeout: Unlimited 00:30:47.269 Command Set Identifier: NVM (00h) 00:30:47.269 Deallocate: Supported 00:30:47.269 Deallocated/Unwritten Error: Not Supported 00:30:47.269 Deallocated Read Value: Unknown 00:30:47.269 Deallocate in Write Zeroes: Not Supported 00:30:47.269 Deallocated Guard Field: 0xFFFF 00:30:47.269 Flush: Supported 00:30:47.269 Reservation: Supported 00:30:47.269 Namespace Sharing Capabilities: Multiple Controllers 00:30:47.269 Size (in LBAs): 131072 (0GiB) 00:30:47.269 Capacity (in LBAs): 131072 (0GiB) 00:30:47.269 Utilization (in LBAs): 131072 (0GiB) 00:30:47.269 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:47.269 EUI64: ABCDEF0123456789 00:30:47.269 UUID: 90a78488-bbd0-47ea-852a-87020bcab333 00:30:47.269 Thin Provisioning: Not Supported 00:30:47.269 Per-NS Atomic Units: Yes 00:30:47.269 Atomic Boundary Size (Normal): 0 00:30:47.269 Atomic Boundary Size (PFail): 0 00:30:47.270 Atomic Boundary Offset: 0 00:30:47.270 Maximum Single Source Range Length: 65535 00:30:47.270 Maximum Copy Length: 65535 00:30:47.270 Maximum Source Range Count: 1 00:30:47.270 NGUID/EUI64 Never Reused: No 00:30:47.270 Namespace 
Write Protected: No 00:30:47.270 Number of LBA Formats: 1 00:30:47.270 Current LBA Format: LBA Format #00 00:30:47.270 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:47.270 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:47.270 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:47.270 rmmod nvme_tcp 00:30:47.270 rmmod nvme_fabrics 00:30:47.270 rmmod nvme_keyring 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3573046 ']' 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3573046 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 3573046 ']' 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 3573046 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3573046 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3573046' 00:30:47.529 killing process with pid 3573046 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 3573046 00:30:47.529 00:04:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 3573046 00:30:48.904 00:04:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:48.904 00:04:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:48.904 00:04:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 
-- # nvmf_tcp_fini 00:30:48.904 00:04:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:48.904 00:04:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:48.904 00:04:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:48.904 00:04:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:48.904 00:04:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:48.904 00:04:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:48.904 00:04:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.904 00:04:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.904 00:04:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:50.807 00:30:50.807 real 0m7.545s 00:30:50.807 user 0m11.095s 00:30:50.807 sys 0m2.202s 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:50.807 ************************************ 00:30:50.807 END TEST nvmf_identify 00:30:50.807 ************************************ 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.807 ************************************ 00:30:50.807 START TEST nvmf_perf 00:30:50.807 ************************************ 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:50.807 * Looking for test storage... 
00:30:50.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:50.807 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:50.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.808 --rc genhtml_branch_coverage=1 00:30:50.808 --rc genhtml_function_coverage=1 00:30:50.808 --rc genhtml_legend=1 00:30:50.808 --rc geninfo_all_blocks=1 00:30:50.808 --rc geninfo_unexecuted_blocks=1 00:30:50.808 00:30:50.808 ' 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:50.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.808 --rc genhtml_branch_coverage=1 00:30:50.808 --rc genhtml_function_coverage=1 00:30:50.808 --rc genhtml_legend=1 00:30:50.808 --rc geninfo_all_blocks=1 00:30:50.808 --rc geninfo_unexecuted_blocks=1 00:30:50.808 00:30:50.808 ' 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:50.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.808 --rc genhtml_branch_coverage=1 00:30:50.808 --rc genhtml_function_coverage=1 00:30:50.808 --rc genhtml_legend=1 00:30:50.808 --rc geninfo_all_blocks=1 00:30:50.808 --rc geninfo_unexecuted_blocks=1 00:30:50.808 00:30:50.808 ' 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:50.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.808 --rc genhtml_branch_coverage=1 00:30:50.808 --rc genhtml_function_coverage=1 00:30:50.808 --rc genhtml_legend=1 00:30:50.808 --rc geninfo_all_blocks=1 00:30:50.808 --rc geninfo_unexecuted_blocks=1 00:30:50.808 00:30:50.808 ' 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:50.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.808 00:04:16 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:50.808 00:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:53.349 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:53.350 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:53.350 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:53.350 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:53.350 00:04:19 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:53.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:53.350 00:04:19 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:53.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:53.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:30:53.350 00:30:53.350 --- 10.0.0.2 ping statistics --- 00:30:53.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.350 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:53.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:53.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:30:53.350 00:30:53.350 --- 10.0.0.1 ping statistics --- 00:30:53.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.350 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.350 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3575356 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3575356 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 3575356 ']' 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:53.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:53.351 00:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:53.351 [2024-11-10 00:04:19.250074] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:30:53.351 [2024-11-10 00:04:19.250219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.351 [2024-11-10 00:04:19.398514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:53.351 [2024-11-10 00:04:19.537757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.351 [2024-11-10 00:04:19.537838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:53.351 [2024-11-10 00:04:19.537864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:53.351 [2024-11-10 00:04:19.537889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:53.351 [2024-11-10 00:04:19.537909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:53.351 [2024-11-10 00:04:19.540673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.351 [2024-11-10 00:04:19.540755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:53.351 [2024-11-10 00:04:19.540836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.351 [2024-11-10 00:04:19.540842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:54.293 00:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:54.293 00:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:30:54.293 00:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:54.293 00:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:54.293 00:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:54.293 00:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.294 00:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:54.294 00:04:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:57.572 00:04:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:57.572 00:04:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:57.572 00:04:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:57.572 00:04:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:58.137 00:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
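At this point perf.sh has assembled its bdev list: a 64 MiB malloc bdev (Malloc0) plus the NVMe namespace (Nvme0n1) that gen_nvme.sh attached at 0000:88:00.0. A minimal sketch of the same RPC sequence, assuming a running nvmf_tgt and the default /var/tmp/spdk.sock RPC socket (rpc.py here stands for scripts/rpc.py in the SPDK tree):

  # 64 MiB malloc bdev with 512-byte blocks -> "Malloc0"
  rpc.py bdev_malloc_create 64 512
  # PCIe address of the NVMe bdev attached via gen_nvme.sh / load_subsystem_config
  rpc.py framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr'

Both bdevs are exported below through nqn.2016-06.io.spdk:cnode1, and the perf runs that follow point spdk_nvme_perf first at the local controller (-r 'trtype:PCIe traddr:0000:88:00.0') as a baseline, then at the TCP listener (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'), varying -q (queue depth) and -o (I/O size).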
00:30:58.137 00:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:58.137 00:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:58.137 00:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:58.137 00:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:58.395 [2024-11-10 00:04:24.342045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.395 00:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:58.652 00:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:58.652 00:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:58.910 00:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:58.910 00:04:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:59.171 00:04:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.432 [2024-11-10 00:04:25.456286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.432 00:04:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:59.689 00:04:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:59.689 00:04:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:59.689 00:04:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:59.690 00:04:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:31:01.077 Initializing NVMe Controllers 00:31:01.077 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:31:01.077 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:31:01.077 Initialization complete. Launching workers. 
00:31:01.077 ======================================================== 00:31:01.077 Latency(us) 00:31:01.077 Device Information : IOPS MiB/s Average min max 00:31:01.077 PCIE (0000:88:00.0) NSID 1 from core 0: 73481.50 287.04 434.72 49.56 6333.85 00:31:01.077 ======================================================== 00:31:01.077 Total : 73481.50 287.04 434.72 49.56 6333.85 00:31:01.077 00:31:01.077 00:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:02.975 Initializing NVMe Controllers 00:31:02.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:02.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:02.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:02.975 Initialization complete. Launching workers. 00:31:02.975 ======================================================== 00:31:02.975 Latency(us) 00:31:02.975 Device Information : IOPS MiB/s Average min max 00:31:02.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 118.00 0.46 8521.37 195.40 45812.84 00:31:02.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 47.00 0.18 21543.84 7922.09 47979.97 00:31:02.975 ======================================================== 00:31:02.975 Total : 165.00 0.64 12230.80 195.40 47979.97 00:31:02.975 00:31:02.975 00:04:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:03.914 Initializing NVMe Controllers 00:31:03.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:03.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:03.914 Initialization complete. Launching workers. 00:31:03.914 ======================================================== 00:31:03.914 Latency(us) 00:31:03.914 Device Information : IOPS MiB/s Average min max 00:31:03.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5617.99 21.95 5722.42 839.98 12393.59 00:31:03.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3826.00 14.95 8403.77 6047.05 16488.07 00:31:03.914 ======================================================== 00:31:03.914 Total : 9443.99 36.89 6808.70 839.98 16488.07 00:31:03.914 00:31:04.172 00:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:04.172 00:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:04.172 00:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:07.528 Initializing NVMe Controllers 00:31:07.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:07.528 Controller IO queue size 128, less than required. 00:31:07.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:07.528 Controller IO queue size 128, less than required. 00:31:07.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:07.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:07.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:07.528 Initialization complete. Launching workers. 00:31:07.528 ======================================================== 00:31:07.528 Latency(us) 00:31:07.528 Device Information : IOPS MiB/s Average min max 00:31:07.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1336.92 334.23 99761.09 60743.94 307651.75 00:31:07.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 514.97 128.74 262312.65 137069.11 453736.78 00:31:07.528 ======================================================== 00:31:07.528 Total : 1851.89 462.97 144963.06 60743.94 453736.78 00:31:07.528 00:31:07.528 00:04:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:07.528 No valid NVMe controllers or AIO or URING devices found 00:31:07.528 Initializing NVMe Controllers 00:31:07.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:07.528 Controller IO queue size 128, less than required. 00:31:07.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:07.528 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:07.528 Controller IO queue size 128, less than required. 00:31:07.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:07.528 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:07.528 WARNING: Some requested NVMe devices were skipped 00:31:07.528 00:04:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:10.813 Initializing NVMe Controllers 00:31:10.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:10.813 Controller IO queue size 128, less than required. 00:31:10.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:10.813 Controller IO queue size 128, less than required. 00:31:10.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:10.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:10.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:10.813 Initialization complete. Launching workers. 
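This last run was launched with --transport-stat, so the statistics block that follows reports per-namespace TCP poll-group counters from the initiator side: polls vs. idle_polls (for NSID 1 roughly 1349 of 3690 polls, about 37%, found no work; for NSID 2 about 5580 of 8344, roughly 67%), plus sock_completions, nvme_completions, submitted_requests and queued_requests. A roughly comparable target-side view could be pulled from the running nvmf_tgt with rpc.py nvmf_get_stats, though that is an assumption about the deployed SPDK build and is not something this script exercises.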
00:31:10.813 00:31:10.813 ==================== 00:31:10.813 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:10.813 TCP transport: 00:31:10.813 polls: 3690 00:31:10.813 idle_polls: 1349 00:31:10.813 sock_completions: 2341 00:31:10.813 nvme_completions: 4769 00:31:10.813 submitted_requests: 7246 00:31:10.813 queued_requests: 1 00:31:10.813 00:31:10.813 ==================== 00:31:10.813 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:10.813 TCP transport: 00:31:10.813 polls: 8344 00:31:10.813 idle_polls: 5580 00:31:10.813 sock_completions: 2764 00:31:10.813 nvme_completions: 5255 00:31:10.813 submitted_requests: 7930 00:31:10.813 queued_requests: 1 00:31:10.813 ======================================================== 00:31:10.813 Latency(us) 00:31:10.813 Device Information : IOPS MiB/s Average min max 00:31:10.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1190.64 297.66 111388.13 65255.54 293664.27 00:31:10.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1312.00 328.00 102225.80 55332.74 418957.63 00:31:10.813 ======================================================== 00:31:10.813 Total : 2502.64 625.66 106584.81 55332.74 418957.63 00:31:10.813 00:31:10.813 00:04:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:10.813 00:04:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:10.813 00:04:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:10.813 00:04:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:31:10.813 00:04:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:14.091 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=49457c18-1130-4bbd-84c0-a87e00529021 00:31:14.091 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 49457c18-1130-4bbd-84c0-a87e00529021 00:31:14.091 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=49457c18-1130-4bbd-84c0-a87e00529021 00:31:14.091 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:31:14.091 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:31:14.091 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:31:14.091 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:14.349 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:31:14.349 { 00:31:14.349 "uuid": "49457c18-1130-4bbd-84c0-a87e00529021", 00:31:14.349 "name": "lvs_0", 00:31:14.349 "base_bdev": "Nvme0n1", 00:31:14.349 "total_data_clusters": 238234, 00:31:14.349 "free_clusters": 238234, 00:31:14.349 "block_size": 512, 00:31:14.349 "cluster_size": 4194304 00:31:14.349 } 00:31:14.349 ]' 00:31:14.349 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="49457c18-1130-4bbd-84c0-a87e00529021") .free_clusters' 00:31:14.607 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=238234 00:31:14.607 00:04:40 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="49457c18-1130-4bbd-84c0-a87e00529021") .cluster_size' 00:31:14.607 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:31:14.607 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=952936 00:31:14.607 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 952936 00:31:14.607 952936 00:31:14.607 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:14.607 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:14.607 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 49457c18-1130-4bbd-84c0-a87e00529021 lbd_0 20480 00:31:14.864 00:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=ece77528-4801-42e9-97f9-57d743da1f93 00:31:14.865 00:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore ece77528-4801-42e9-97f9-57d743da1f93 lvs_n_0 00:31:15.808 00:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=25dee5c4-5b44-4325-bfce-5eac18d8d3c7 00:31:15.808 00:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 25dee5c4-5b44-4325-bfce-5eac18d8d3c7 00:31:15.808 00:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=25dee5c4-5b44-4325-bfce-5eac18d8d3c7 00:31:15.808 00:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:31:15.808 00:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:31:15.808 00:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:31:15.808 00:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:16.066 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:31:16.066 { 00:31:16.066 "uuid": "49457c18-1130-4bbd-84c0-a87e00529021", 00:31:16.066 "name": "lvs_0", 00:31:16.066 "base_bdev": "Nvme0n1", 00:31:16.066 "total_data_clusters": 238234, 00:31:16.066 "free_clusters": 233114, 00:31:16.066 "block_size": 512, 00:31:16.066 "cluster_size": 4194304 00:31:16.066 }, 00:31:16.066 { 00:31:16.066 "uuid": "25dee5c4-5b44-4325-bfce-5eac18d8d3c7", 00:31:16.066 "name": "lvs_n_0", 00:31:16.066 "base_bdev": "ece77528-4801-42e9-97f9-57d743da1f93", 00:31:16.066 "total_data_clusters": 5114, 00:31:16.066 "free_clusters": 5114, 00:31:16.066 "block_size": 512, 00:31:16.066 "cluster_size": 4194304 00:31:16.066 } 00:31:16.066 ]' 00:31:16.066 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="25dee5c4-5b44-4325-bfce-5eac18d8d3c7") .free_clusters' 00:31:16.066 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=5114 00:31:16.066 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="25dee5c4-5b44-4325-bfce-5eac18d8d3c7") .cluster_size' 00:31:16.066 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:31:16.066 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=20456 00:31:16.066 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1376 -- # echo 20456 00:31:16.066 20456 00:31:16.066 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:16.066 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 25dee5c4-5b44-4325-bfce-5eac18d8d3c7 lbd_nest_0 20456 00:31:16.632 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=334529aa-03b8-4452-8f38-ed06ecd116c2 00:31:16.632 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:16.632 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:16.632 00:04:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 334529aa-03b8-4452-8f38-ed06ecd116c2 00:31:16.890 00:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.147 00:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:17.147 00:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:17.147 00:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:17.147 00:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:17.147 00:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:29.339 Initializing NVMe Controllers 00:31:29.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:29.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:29.339 Initialization complete. Launching workers. 00:31:29.339 ======================================================== 00:31:29.339 Latency(us) 00:31:29.339 Device Information : IOPS MiB/s Average min max 00:31:29.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.58 0.02 21098.69 237.63 46466.29 00:31:29.339 ======================================================== 00:31:29.339 Total : 47.58 0.02 21098.69 237.63 46466.29 00:31:29.339 00:31:29.339 00:04:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:29.339 00:04:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:39.318 Initializing NVMe Controllers 00:31:39.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:39.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:39.318 Initialization complete. Launching workers. 
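The lvol sizing above is just get_lvs_free_mb arithmetic: free MiB = free_clusters * cluster_size / 1 MiB, i.e. 238234 * 4 MiB = 952936 MiB on lvs_0 (then capped to 20480 MiB for lbd_0), and 5114 * 4 MiB = 20456 MiB on the nested lvs_n_0, which is why lbd_nest_0 is created with 20456. A sketch of the same calculation against the running target, selecting the store by name rather than by UUID purely for brevity:

  fc=$(rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_n_0").free_clusters')
  cs=$(rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_n_0").cluster_size')
  echo $(( fc * cs / 1024 / 1024 ))   # 5114 * 4194304 / 1048576 = 20456 MiB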
00:31:39.318 ======================================================== 00:31:39.318 Latency(us) 00:31:39.318 Device Information : IOPS MiB/s Average min max 00:31:39.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 74.00 9.25 13529.28 5980.16 51876.33 00:31:39.318 ======================================================== 00:31:39.318 Total : 74.00 9.25 13529.28 5980.16 51876.33 00:31:39.318 00:31:39.318 00:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:39.318 00:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:39.318 00:05:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:49.284 Initializing NVMe Controllers 00:31:49.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:49.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:49.284 Initialization complete. Launching workers. 00:31:49.284 ======================================================== 00:31:49.284 Latency(us) 00:31:49.284 Device Information : IOPS MiB/s Average min max 00:31:49.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4733.85 2.31 6759.18 657.10 13708.48 00:31:49.284 ======================================================== 00:31:49.284 Total : 4733.85 2.31 6759.18 657.10 13708.48 00:31:49.284 00:31:49.284 00:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:49.284 00:05:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:59.251 Initializing NVMe Controllers 00:31:59.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:59.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:59.251 Initialization complete. Launching workers. 00:31:59.251 ======================================================== 00:31:59.251 Latency(us) 00:31:59.251 Device Information : IOPS MiB/s Average min max 00:31:59.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3577.22 447.15 8946.41 1415.41 21018.54 00:31:59.251 ======================================================== 00:31:59.251 Total : 3577.22 447.15 8946.41 1415.41 21018.54 00:31:59.251 00:31:59.251 00:05:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:59.251 00:05:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:59.251 00:05:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:11.470 Initializing NVMe Controllers 00:32:11.470 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:11.470 Controller IO queue size 128, less than required. 00:32:11.470 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:11.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:11.470 Initialization complete. Launching workers. 00:32:11.470 ======================================================== 00:32:11.470 Latency(us) 00:32:11.470 Device Information : IOPS MiB/s Average min max 00:32:11.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8465.65 4.13 15128.98 2233.74 32274.83 00:32:11.470 ======================================================== 00:32:11.470 Total : 8465.65 4.13 15128.98 2233.74 32274.83 00:32:11.470 00:32:11.470 00:05:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:11.470 00:05:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:21.490 Initializing NVMe Controllers 00:32:21.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:21.490 Controller IO queue size 128, less than required. 00:32:21.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:21.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:21.490 Initialization complete. Launching workers. 00:32:21.490 ======================================================== 00:32:21.490 Latency(us) 00:32:21.490 Device Information : IOPS MiB/s Average min max 00:32:21.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1164.60 145.57 110586.75 10188.03 233593.93 00:32:21.490 ======================================================== 00:32:21.490 Total : 1164.60 145.57 110586.75 10188.03 233593.93 00:32:21.490 00:32:21.490 00:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:21.490 00:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 334529aa-03b8-4452-8f38-ed06ecd116c2 00:32:21.490 00:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:21.746 00:05:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ece77528-4801-42e9-97f9-57d743da1f93 00:32:22.002 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:22.568 rmmod nvme_tcp 
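Target teardown is underway here: nvmftestfini kills the nvmf_tgt process (pid 3575356) and unloads the nvme-tcp module stack, which produces the rmmod lines around this point, then restores the firewall and network-namespace state set up at the start of the test. A rough sketch of that last part, assuming _remove_spdk_ns ultimately deletes the cvl_0_0_ns_spdk namespace (the trace only shows the wrapper call):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the tagged port-4420 ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                         # what _remove_spdk_ns is expected to do (assumption)
  ip -4 addr flush cvl_0_1                                # clear the initiator-side interface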
00:32:22.568 rmmod nvme_fabrics 00:32:22.568 rmmod nvme_keyring 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3575356 ']' 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3575356 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 3575356 ']' 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 3575356 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3575356 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3575356' 00:32:22.568 killing process with pid 3575356 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 3575356 00:32:22.568 00:05:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 3575356 00:32:25.135 00:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:25.135 00:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:25.135 00:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:25.135 00:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:25.135 00:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:32:25.135 00:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:25.135 00:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:32:25.135 00:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:25.135 00:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:25.135 00:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.135 00:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.135 00:05:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:27.038 00:32:27.038 real 1m36.185s 00:32:27.038 user 5m55.848s 00:32:27.038 sys 0m15.717s 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:27.038 ************************************ 00:32:27.038 END TEST nvmf_perf 00:32:27.038 ************************************ 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.038 ************************************ 00:32:27.038 START TEST nvmf_fio_host 00:32:27.038 ************************************ 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:27.038 * Looking for test storage... 00:32:27.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:27.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.038 --rc genhtml_branch_coverage=1 00:32:27.038 --rc genhtml_function_coverage=1 00:32:27.038 --rc genhtml_legend=1 00:32:27.038 --rc geninfo_all_blocks=1 00:32:27.038 --rc geninfo_unexecuted_blocks=1 00:32:27.038 00:32:27.038 ' 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:27.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.038 --rc genhtml_branch_coverage=1 00:32:27.038 --rc genhtml_function_coverage=1 00:32:27.038 --rc genhtml_legend=1 00:32:27.038 --rc geninfo_all_blocks=1 00:32:27.038 --rc geninfo_unexecuted_blocks=1 00:32:27.038 00:32:27.038 ' 00:32:27.038 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:27.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.039 --rc genhtml_branch_coverage=1 00:32:27.039 --rc genhtml_function_coverage=1 00:32:27.039 --rc genhtml_legend=1 00:32:27.039 --rc geninfo_all_blocks=1 00:32:27.039 --rc geninfo_unexecuted_blocks=1 00:32:27.039 00:32:27.039 ' 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:27.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.039 --rc genhtml_branch_coverage=1 00:32:27.039 --rc genhtml_function_coverage=1 00:32:27.039 --rc genhtml_legend=1 00:32:27.039 --rc geninfo_all_blocks=1 00:32:27.039 --rc geninfo_unexecuted_blocks=1 00:32:27.039 00:32:27.039 ' 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.039 00:05:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:27.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:27.039 
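The nvmftestinit phase that follows discovers the two e810 ports (cvl_0_0, cvl_0_1), moves the target-side port into a private network namespace, and gives each side a 10.0.0.x address so target and initiator traffic stays on the physical NICs. Condensed from the traced commands further down (interface names, addresses and the SPDK_NVMF firewall tag are the ones this run actually uses):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # rule is tagged SPDK_NVMF so teardown can find it
Both directions are then checked with a single ping before the target is started.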
00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.039 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.298 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:27.298 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:27.298 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:27.298 00:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:29.206 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:29.206 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:29.206 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:29.206 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:29.206 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:29.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:32:29.465 00:32:29.465 --- 10.0.0.2 ping statistics --- 00:32:29.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.465 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:29.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:32:29.465 00:32:29.465 --- 10.0.0.1 ping statistics --- 00:32:29.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.465 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3587977 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3587977 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 3587977 ']' 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:29.465 00:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.465 [2024-11-10 00:05:55.557425] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
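At this point the target has been launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 3587977). The lines that follow configure it over rpc.py before the first fio pass; stripped of full paths and timestamps, the bring-up amounts to roughly:
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1                         # 64 MiB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
fio then targets that listener through the SPDK NVMe ioengine, LD_PRELOADed together with libasan.so.8 since the sanitizer check below finds an ASAN build.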
00:32:29.465 [2024-11-10 00:05:55.557582] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.723 [2024-11-10 00:05:55.703281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:29.723 [2024-11-10 00:05:55.838963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:29.723 [2024-11-10 00:05:55.839039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:29.723 [2024-11-10 00:05:55.839065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:29.723 [2024-11-10 00:05:55.839089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:29.723 [2024-11-10 00:05:55.839109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:29.723 [2024-11-10 00:05:55.841895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.723 [2024-11-10 00:05:55.841976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:29.723 [2024-11-10 00:05:55.842068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.723 [2024-11-10 00:05:55.842073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:30.657 00:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:30.657 00:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:32:30.657 00:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:30.657 [2024-11-10 00:05:56.804913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:30.657 00:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:30.657 00:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:30.657 00:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.657 00:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:31.221 Malloc1 00:32:31.221 00:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:31.478 00:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:31.736 00:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.993 [2024-11-10 00:05:57.974422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.994 00:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:32.251 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:32.251 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:32.251 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:32.251 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:32.251 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:32.251 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:32.252 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:32.252 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:32:32.252 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:32.252 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:32.252 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:32.252 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:32:32.252 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:32.252 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:32.252 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:32.252 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:32:32.252 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:32.252 00:05:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:32.509 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:32.509 fio-3.35 00:32:32.509 Starting 1 thread 00:32:35.071 00:32:35.071 test: (groupid=0, jobs=1): err= 0: pid=3588429: Sun Nov 10 00:06:00 2024 00:32:35.071 read: IOPS=5803, BW=22.7MiB/s (23.8MB/s)(45.6MiB/2010msec) 00:32:35.071 slat (usec): min=3, max=125, avg= 3.84, stdev= 2.12 00:32:35.071 clat (usec): min=3763, max=21840, avg=11959.25, stdev=1123.10 00:32:35.071 lat (usec): min=3796, max=21844, avg=11963.09, stdev=1123.04 00:32:35.071 clat percentiles (usec): 00:32:35.071 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:32:35.071 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:32:35.071 | 70.00th=[12518], 
80.00th=[12780], 90.00th=[13304], 95.00th=[13698], 00:32:35.071 | 99.00th=[14484], 99.50th=[15008], 99.90th=[19792], 99.95th=[20317], 00:32:35.071 | 99.99th=[21627] 00:32:35.071 bw ( KiB/s): min=22259, max=23888, per=99.84%, avg=23176.75, stdev=675.92, samples=4 00:32:35.071 iops : min= 5564, max= 5972, avg=5794.00, stdev=169.32, samples=4 00:32:35.071 write: IOPS=5788, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2010msec); 0 zone resets 00:32:35.071 slat (nsec): min=3116, max=98772, avg=3927.27, stdev=1709.62 00:32:35.071 clat (usec): min=1304, max=17296, avg=9963.32, stdev=911.12 00:32:35.071 lat (usec): min=1316, max=17300, avg=9967.25, stdev=911.05 00:32:35.071 clat percentiles (usec): 00:32:35.071 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9241], 00:32:35.071 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:32:35.071 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11338], 00:32:35.071 | 99.00th=[11863], 99.50th=[12256], 99.90th=[16450], 99.95th=[16909], 00:32:35.071 | 99.99th=[17171] 00:32:35.071 bw ( KiB/s): min=22848, max=23552, per=99.96%, avg=23144.25, stdev=297.18, samples=4 00:32:35.071 iops : min= 5712, max= 5888, avg=5786.00, stdev=74.30, samples=4 00:32:35.071 lat (msec) : 2=0.01%, 4=0.09%, 10=26.80%, 20=73.06%, 50=0.04% 00:32:35.071 cpu : usr=65.06%, sys=33.35%, ctx=67, majf=0, minf=1545 00:32:35.071 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:32:35.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:35.071 issued rwts: total=11665,11634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:35.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:35.071 00:32:35.071 Run status group 0 (all jobs): 00:32:35.071 READ: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.6MiB (47.8MB), run=2010-2010msec 00:32:35.071 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.7MB), run=2010-2010msec 00:32:35.071 ----------------------------------------------------- 00:32:35.071 Suppressions used: 00:32:35.071 count bytes template 00:32:35.071 1 57 /usr/src/fio/parse.c 00:32:35.071 1 8 libtcmalloc_minimal.so 00:32:35.071 ----------------------------------------------------- 00:32:35.071 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # shift 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:35.329 00:06:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:35.588 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:35.588 fio-3.35 00:32:35.588 Starting 1 thread 00:32:38.114 00:32:38.114 test: (groupid=0, jobs=1): err= 0: pid=3588903: Sun Nov 10 00:06:04 2024 00:32:38.114 read: IOPS=6143, BW=96.0MiB/s (101MB/s)(193MiB/2012msec) 00:32:38.114 slat (usec): min=3, max=108, avg= 5.46, stdev= 2.30 00:32:38.114 clat (usec): min=3478, max=22505, avg=11994.58, stdev=2731.81 00:32:38.114 lat (usec): min=3483, max=22510, avg=12000.03, stdev=2731.82 00:32:38.114 clat percentiles (usec): 00:32:38.114 | 1.00th=[ 6456], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[ 9765], 00:32:38.114 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11863], 60.00th=[12387], 00:32:38.114 | 70.00th=[12911], 80.00th=[13960], 90.00th=[15533], 95.00th=[17171], 00:32:38.114 | 99.00th=[19792], 99.50th=[20579], 99.90th=[21890], 99.95th=[22152], 00:32:38.114 | 99.99th=[22414] 00:32:38.114 bw ( KiB/s): min=40224, max=55648, per=49.66%, avg=48808.00, stdev=7719.38, samples=4 00:32:38.114 iops : min= 2514, max= 3478, avg=3050.50, stdev=482.46, samples=4 00:32:38.114 write: IOPS=3534, BW=55.2MiB/s (57.9MB/s)(99.9MiB/1808msec); 0 zone resets 00:32:38.114 slat (usec): min=34, max=169, avg=38.42, stdev= 6.61 00:32:38.114 clat (usec): min=9525, max=28833, avg=16029.38, stdev=2644.46 00:32:38.114 lat (usec): min=9559, max=28869, avg=16067.81, stdev=2644.43 00:32:38.114 clat percentiles (usec): 00:32:38.114 | 1.00th=[10421], 5.00th=[11994], 10.00th=[12911], 20.00th=[13829], 00:32:38.114 | 30.00th=[14484], 40.00th=[15139], 50.00th=[15795], 60.00th=[16450], 00:32:38.114 | 70.00th=[17171], 80.00th=[18220], 90.00th=[19530], 95.00th=[20841], 00:32:38.114 | 99.00th=[22938], 99.50th=[23725], 99.90th=[28443], 99.95th=[28443], 00:32:38.115 | 99.99th=[28705] 00:32:38.115 bw ( KiB/s): min=42944, max=56768, per=89.44%, avg=50584.00, stdev=7143.55, samples=4 00:32:38.115 iops : min= 2684, max= 3548, avg=3161.50, stdev=446.47, samples=4 00:32:38.115 lat (msec) : 4=0.04%, 
10=15.04%, 20=81.97%, 50=2.94% 00:32:38.115 cpu : usr=79.71%, sys=18.95%, ctx=36, majf=0, minf=2114 00:32:38.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:32:38.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:38.115 issued rwts: total=12360,6391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:38.115 00:32:38.115 Run status group 0 (all jobs): 00:32:38.115 READ: bw=96.0MiB/s (101MB/s), 96.0MiB/s-96.0MiB/s (101MB/s-101MB/s), io=193MiB (203MB), run=2012-2012msec 00:32:38.115 WRITE: bw=55.2MiB/s (57.9MB/s), 55.2MiB/s-55.2MiB/s (57.9MB/s-57.9MB/s), io=99.9MiB (105MB), run=1808-1808msec 00:32:38.372 ----------------------------------------------------- 00:32:38.372 Suppressions used: 00:32:38.372 count bytes template 00:32:38.372 1 57 /usr/src/fio/parse.c 00:32:38.372 176 16896 /usr/src/fio/iolog.c 00:32:38.372 1 8 libtcmalloc_minimal.so 00:32:38.372 ----------------------------------------------------- 00:32:38.372 00:32:38.372 00:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:38.630 00:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:38.630 00:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:38.630 00:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:38.630 00:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:38.630 00:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:32:38.630 00:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:38.630 00:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:38.630 00:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:38.630 00:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:32:38.630 00:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:32:38.630 00:06:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:41.914 Nvme0n1 00:32:41.914 00:06:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:45.190 00:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=173eea91-147c-4f5c-be9a-f8803b8c3a74 00:32:45.190 00:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 173eea91-147c-4f5c-be9a-f8803b8c3a74 00:32:45.190 00:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=173eea91-147c-4f5c-be9a-f8803b8c3a74 00:32:45.190 00:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:32:45.190 00:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # 
local fc 00:32:45.190 00:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:32:45.191 00:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:45.191 00:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:32:45.191 { 00:32:45.191 "uuid": "173eea91-147c-4f5c-be9a-f8803b8c3a74", 00:32:45.191 "name": "lvs_0", 00:32:45.191 "base_bdev": "Nvme0n1", 00:32:45.191 "total_data_clusters": 930, 00:32:45.191 "free_clusters": 930, 00:32:45.191 "block_size": 512, 00:32:45.191 "cluster_size": 1073741824 00:32:45.191 } 00:32:45.191 ]' 00:32:45.191 00:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="173eea91-147c-4f5c-be9a-f8803b8c3a74") .free_clusters' 00:32:45.191 00:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=930 00:32:45.191 00:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="173eea91-147c-4f5c-be9a-f8803b8c3a74") .cluster_size' 00:32:45.191 00:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=1073741824 00:32:45.191 00:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=952320 00:32:45.191 00:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 952320 00:32:45.191 952320 00:32:45.191 00:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:45.448 2ea426d8-2962-4335-b8d0-dfeb0d91cfdc 00:32:45.448 00:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:45.706 00:06:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:45.963 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:46.221 00:06:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:46.221 00:06:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:46.479 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:46.479 fio-3.35 00:32:46.479 Starting 1 thread 00:32:49.005 00:32:49.006 test: (groupid=0, jobs=1): err= 0: pid=3590808: Sun Nov 10 00:06:15 2024 00:32:49.006 read: IOPS=4533, BW=17.7MiB/s (18.6MB/s)(35.6MiB/2011msec) 00:32:49.006 slat (usec): min=3, max=139, avg= 3.72, stdev= 2.18 00:32:49.006 clat (usec): min=1480, max=172558, avg=15276.11, stdev=13037.47 00:32:49.006 lat (usec): min=1484, max=172608, avg=15279.84, stdev=13037.81 00:32:49.006 clat percentiles (msec): 00:32:49.006 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:32:49.006 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 15], 00:32:49.006 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 16], 95.00th=[ 17], 00:32:49.006 | 99.00th=[ 19], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:32:49.006 | 99.99th=[ 174] 00:32:49.006 bw ( KiB/s): min=12672, max=20136, per=99.84%, avg=18106.00, stdev=3626.25, samples=4 00:32:49.006 iops : min= 3168, max= 5034, avg=4526.50, stdev=906.56, samples=4 00:32:49.006 write: IOPS=4539, BW=17.7MiB/s (18.6MB/s)(35.7MiB/2011msec); 0 zone resets 00:32:49.006 slat (usec): min=3, max=134, avg= 3.87, stdev= 1.85 00:32:49.006 clat (usec): min=404, max=170154, avg=12690.00, stdev=12254.02 00:32:49.006 lat (usec): min=408, max=170161, avg=12693.87, stdev=12254.38 00:32:49.006 clat percentiles (msec): 00:32:49.006 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:32:49.006 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:32:49.006 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:32:49.006 | 99.00th=[ 18], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:32:49.006 | 99.99th=[ 171] 00:32:49.006 bw ( KiB/s): min=13288, max=20032, per=99.86%, avg=18130.00, stdev=3234.44, samples=4 00:32:49.006 iops : min= 3322, max= 5008, avg=4532.50, stdev=808.61, samples=4 00:32:49.006 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 
00:32:49.006 lat (msec) : 2=0.02%, 4=0.10%, 10=2.54%, 20=96.43%, 50=0.19% 00:32:49.006 lat (msec) : 250=0.70% 00:32:49.006 cpu : usr=68.31%, sys=30.35%, ctx=72, majf=0, minf=1543 00:32:49.006 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:32:49.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:49.006 issued rwts: total=9117,9128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.006 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:49.006 00:32:49.006 Run status group 0 (all jobs): 00:32:49.006 READ: bw=17.7MiB/s (18.6MB/s), 17.7MiB/s-17.7MiB/s (18.6MB/s-18.6MB/s), io=35.6MiB (37.3MB), run=2011-2011msec 00:32:49.006 WRITE: bw=17.7MiB/s (18.6MB/s), 17.7MiB/s-17.7MiB/s (18.6MB/s-18.6MB/s), io=35.7MiB (37.4MB), run=2011-2011msec 00:32:49.265 ----------------------------------------------------- 00:32:49.265 Suppressions used: 00:32:49.265 count bytes template 00:32:49.265 1 58 /usr/src/fio/parse.c 00:32:49.265 1 8 libtcmalloc_minimal.so 00:32:49.265 ----------------------------------------------------- 00:32:49.265 00:32:49.265 00:06:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:49.557 00:06:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:50.957 00:06:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=49e5ef80-9957-41b3-b42a-79ef7981cc9a 00:32:50.957 00:06:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 49e5ef80-9957-41b3-b42a-79ef7981cc9a 00:32:50.957 00:06:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=49e5ef80-9957-41b3-b42a-79ef7981cc9a 00:32:50.957 00:06:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:32:50.957 00:06:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:32:50.957 00:06:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:32:50.957 00:06:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:50.957 00:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:32:50.957 { 00:32:50.957 "uuid": "173eea91-147c-4f5c-be9a-f8803b8c3a74", 00:32:50.957 "name": "lvs_0", 00:32:50.957 "base_bdev": "Nvme0n1", 00:32:50.957 "total_data_clusters": 930, 00:32:50.957 "free_clusters": 0, 00:32:50.957 "block_size": 512, 00:32:50.957 "cluster_size": 1073741824 00:32:50.957 }, 00:32:50.957 { 00:32:50.957 "uuid": "49e5ef80-9957-41b3-b42a-79ef7981cc9a", 00:32:50.957 "name": "lvs_n_0", 00:32:50.957 "base_bdev": "2ea426d8-2962-4335-b8d0-dfeb0d91cfdc", 00:32:50.957 "total_data_clusters": 237847, 00:32:50.957 "free_clusters": 237847, 00:32:50.957 "block_size": 512, 00:32:50.957 "cluster_size": 4194304 00:32:50.957 } 00:32:50.957 ]' 00:32:50.957 00:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="49e5ef80-9957-41b3-b42a-79ef7981cc9a") .free_clusters' 00:32:50.957 00:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=237847 00:32:50.957 00:06:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="49e5ef80-9957-41b3-b42a-79ef7981cc9a") .cluster_size' 00:32:51.215 00:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=4194304 00:32:51.215 00:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=951388 00:32:51.215 00:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 951388 00:32:51.215 951388 00:32:51.215 00:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:52.148 bb7a685e-63fc-408b-913a-1376c0be7c1d 00:32:52.148 00:06:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:52.406 00:06:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:52.971 00:06:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # break 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:52.971 00:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:53.228 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:53.228 fio-3.35 00:32:53.228 Starting 1 thread 00:32:55.762 00:32:55.762 test: (groupid=0, jobs=1): err= 0: pid=3591664: Sun Nov 10 00:06:21 2024 00:32:55.762 read: IOPS=4316, BW=16.9MiB/s (17.7MB/s)(33.9MiB/2010msec) 00:32:55.762 slat (usec): min=3, max=220, avg= 3.84, stdev= 3.35 00:32:55.762 clat (usec): min=6310, max=28148, avg=16094.00, stdev=1709.17 00:32:55.762 lat (usec): min=6318, max=28151, avg=16097.84, stdev=1708.97 00:32:55.762 clat percentiles (usec): 00:32:55.762 | 1.00th=[12518], 5.00th=[13698], 10.00th=[14222], 20.00th=[14877], 00:32:55.762 | 30.00th=[15270], 40.00th=[15664], 50.00th=[16057], 60.00th=[16450], 00:32:55.762 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17957], 95.00th=[18482], 00:32:55.762 | 99.00th=[21890], 99.50th=[24249], 99.90th=[26870], 99.95th=[26870], 00:32:55.762 | 99.99th=[28181] 00:32:55.762 bw ( KiB/s): min=15648, max=18032, per=99.70%, avg=17214.00, stdev=1064.90, samples=4 00:32:55.762 iops : min= 3912, max= 4508, avg=4303.50, stdev=266.22, samples=4 00:32:55.762 write: IOPS=4312, BW=16.8MiB/s (17.7MB/s)(33.9MiB/2010msec); 0 zone resets 00:32:55.762 slat (usec): min=3, max=171, avg= 4.03, stdev= 2.42 00:32:55.762 clat (usec): min=3186, max=24070, avg=13287.68, stdev=1422.67 00:32:55.762 lat (usec): min=3196, max=24074, avg=13291.70, stdev=1422.60 00:32:55.762 clat percentiles (usec): 00:32:55.762 | 1.00th=[10159], 5.00th=[11338], 10.00th=[11731], 20.00th=[12387], 00:32:55.762 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13566], 00:32:55.762 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14746], 95.00th=[15270], 00:32:55.762 | 99.00th=[17957], 99.50th=[20579], 99.90th=[23462], 99.95th=[23725], 00:32:55.762 | 99.99th=[23987] 00:32:55.762 bw ( KiB/s): min=16600, max=17600, per=99.83%, avg=17222.00, stdev=431.87, samples=4 00:32:55.762 iops : min= 4150, max= 4400, avg=4305.50, stdev=107.97, samples=4 00:32:55.762 lat (msec) : 4=0.02%, 10=0.48%, 20=98.49%, 50=1.01% 00:32:55.762 cpu : usr=65.75%, sys=32.90%, ctx=80, majf=0, minf=1543 00:32:55.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:55.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:55.762 issued rwts: total=8676,8669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:55.762 00:32:55.762 Run status group 0 (all jobs): 00:32:55.762 READ: bw=16.9MiB/s (17.7MB/s), 16.9MiB/s-16.9MiB/s (17.7MB/s-17.7MB/s), io=33.9MiB (35.5MB), run=2010-2010msec 00:32:55.762 WRITE: bw=16.8MiB/s (17.7MB/s), 16.8MiB/s-16.8MiB/s (17.7MB/s-17.7MB/s), io=33.9MiB (35.5MB), run=2010-2010msec 00:32:56.019 ----------------------------------------------------- 00:32:56.019 Suppressions used: 00:32:56.019 count bytes template 00:32:56.019 1 58 /usr/src/fio/parse.c 
00:32:56.019 1 8 libtcmalloc_minimal.so 00:32:56.019 ----------------------------------------------------- 00:32:56.019 00:32:56.019 00:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:56.277 00:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:56.277 00:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:01.540 00:06:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:01.540 00:06:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:04.072 00:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:04.072 00:06:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:05.973 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:05.973 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:05.973 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:05.973 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:05.973 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:05.973 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:05.973 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:05.973 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:05.973 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:05.973 rmmod nvme_tcp 00:33:06.230 rmmod nvme_fabrics 00:33:06.230 rmmod nvme_keyring 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3587977 ']' 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3587977 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 3587977 ']' 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 3587977 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3587977 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 
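The teardown running across these closing lines mirrors the setup: nvmfcleanup unloads the kernel NVMe modules, killprocess stops the target, and nvmf_tcp_fini strips the firewall rule and namespace. In outline (pid and interface names as in this run; the namespace removal itself happens inside _remove_spdk_ns, whose individual commands are not echoed here):
  modprobe -v -r nvme-tcp                                  # the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring going away
  modprobe -v -r nvme-fabrics
  kill 3587977                                             # then wait for the nvmf_tgt reactor process to exit
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the rules tagged SPDK_NVMF
  ip -4 addr flush cvl_0_1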
00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3587977' 00:33:06.230 killing process with pid 3587977 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 3587977 00:33:06.230 00:06:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 3587977 00:33:07.606 00:06:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:07.606 00:06:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:07.606 00:06:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:07.606 00:06:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:07.606 00:06:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:07.606 00:06:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:07.606 00:06:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:07.606 00:06:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.606 00:06:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.606 00:06:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.606 00:06:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.606 00:06:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.509 00:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.509 00:33:09.509 real 0m42.511s 00:33:09.509 user 2m42.088s 00:33:09.509 sys 0m8.529s 00:33:09.509 00:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:09.509 00:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.509 ************************************ 00:33:09.509 END TEST nvmf_fio_host 00:33:09.509 ************************************ 00:33:09.509 00:06:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:09.509 00:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:09.509 00:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:09.509 00:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.509 ************************************ 00:33:09.509 START TEST nvmf_failover 00:33:09.509 ************************************ 00:33:09.509 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:09.509 * Looking for test storage... 
00:33:09.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:09.509 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:09.509 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:33:09.509 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:09.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.768 --rc genhtml_branch_coverage=1 00:33:09.768 --rc genhtml_function_coverage=1 00:33:09.768 --rc genhtml_legend=1 00:33:09.768 --rc geninfo_all_blocks=1 00:33:09.768 --rc geninfo_unexecuted_blocks=1 00:33:09.768 00:33:09.768 ' 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:09.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.768 --rc genhtml_branch_coverage=1 00:33:09.768 --rc genhtml_function_coverage=1 00:33:09.768 --rc genhtml_legend=1 00:33:09.768 --rc geninfo_all_blocks=1 00:33:09.768 --rc geninfo_unexecuted_blocks=1 00:33:09.768 00:33:09.768 ' 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:09.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.768 --rc genhtml_branch_coverage=1 00:33:09.768 --rc genhtml_function_coverage=1 00:33:09.768 --rc genhtml_legend=1 00:33:09.768 --rc geninfo_all_blocks=1 00:33:09.768 --rc geninfo_unexecuted_blocks=1 00:33:09.768 00:33:09.768 ' 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:09.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.768 --rc genhtml_branch_coverage=1 00:33:09.768 --rc genhtml_function_coverage=1 00:33:09.768 --rc genhtml_legend=1 00:33:09.768 --rc geninfo_all_blocks=1 00:33:09.768 --rc geninfo_unexecuted_blocks=1 00:33:09.768 00:33:09.768 ' 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.768 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:09.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
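Before the host-side failover logic starts, failover.sh builds the target configuration purely over rpc.py; the calls traced further below (host/failover.sh lines 22-28) amount to the sequence sketched here. The NQN, address and ports are the ones visible in the trace; the loop is only shorthand for the three add_listener calls, not the script's literal code.

# Sketch of the target-side setup that failover.sh drives through rpc.py:
# TCP transport, one Malloc-backed namespace, and three listeners to fail
# over between (ports 4420/4421/4422 on 10.0.0.2).
rpc=./spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done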
00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.769 00:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:12.301 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:12.302 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:12.302 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:12.302 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:12.302 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:12.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:33:12.302 00:33:12.302 --- 10.0.0.2 ping statistics --- 00:33:12.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.302 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:12.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:12.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:33:12.302 00:33:12.302 --- 10.0.0.1 ping statistics --- 00:33:12.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.302 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:12.302 00:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:12.302 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:12.302 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:12.302 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:12.302 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:12.302 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3595190 00:33:12.302 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:12.302 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3595190 00:33:12.302 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3595190 ']' 00:33:12.302 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.303 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:12.303 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.303 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:12.303 00:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:12.303 [2024-11-10 00:06:38.120112] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:33:12.303 [2024-11-10 00:06:38.120277] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.303 [2024-11-10 00:06:38.275233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:12.303 [2024-11-10 00:06:38.415306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
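The nvmf_tcp_init trace above moves the target-side e810 port (cvl_0_0) into its own network namespace so the initiator (10.0.0.1 on cvl_0_1) reaches the target (10.0.0.2) over the physical link, opens TCP/4420 in iptables, and ping-checks both directions before the target application starts. A rough manual equivalent of those steps, using the interface and namespace names from the trace:

# Rough equivalent of the traced nvmf_tcp_init steps for this phy setup.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator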
00:33:12.303 [2024-11-10 00:06:38.415386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.303 [2024-11-10 00:06:38.415416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.303 [2024-11-10 00:06:38.415441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.303 [2024-11-10 00:06:38.415461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:12.303 [2024-11-10 00:06:38.418170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:12.303 [2024-11-10 00:06:38.418268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.303 [2024-11-10 00:06:38.418272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:13.245 00:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:13.245 00:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:33:13.245 00:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:13.245 00:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:13.245 00:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:13.245 00:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.245 00:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:13.245 [2024-11-10 00:06:39.348194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.245 00:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:13.815 Malloc0 00:33:13.815 00:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:13.815 00:06:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:14.383 00:06:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:14.383 [2024-11-10 00:06:40.537167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:14.383 00:06:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:14.641 [2024-11-10 00:06:40.822133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:14.899 00:06:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:14.899 [2024-11-10 00:06:41.087144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:33:15.157 00:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3595605 00:33:15.157 00:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:15.157 00:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:15.157 00:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3595605 /var/tmp/bdevperf.sock 00:33:15.157 00:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3595605 ']' 00:33:15.158 00:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:15.158 00:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:15.158 00:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:15.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:15.158 00:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:15.158 00:06:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:16.093 00:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:16.093 00:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:33:16.093 00:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:16.350 NVMe0n1 00:33:16.351 00:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:16.916 00:33:16.916 00:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3595753 00:33:16.916 00:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:16.916 00:06:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:17.851 00:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.110 [2024-11-10 00:06:44.181513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
00:33:18.110 [2024-11-10 00:06:44.181675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.181987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
00:33:18.110 [2024-11-10 00:06:44.182054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 [2024-11-10 00:06:44.182309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:18.110 00:06:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:21.397 00:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:21.655 00:33:21.655 00:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:21.913 [2024-11-10 00:06:47.956017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956133] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 [2024-11-10 00:06:47.956476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:21.913 00:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:25.223 00:06:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.223 [2024-11-10 00:06:51.272840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.223 00:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:26.156 00:06:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:26.415 [2024-11-10 00:06:52.551014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 [2024-11-10 00:06:52.551363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:26.415 00:06:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3595753 00:33:33.001 { 00:33:33.001 "results": [ 00:33:33.001 { 00:33:33.001 "job": "NVMe0n1", 00:33:33.001 "core_mask": "0x1", 00:33:33.001 "workload": 
"verify", 00:33:33.001 "status": "finished", 00:33:33.001 "verify_range": { 00:33:33.001 "start": 0, 00:33:33.001 "length": 16384 00:33:33.001 }, 00:33:33.001 "queue_depth": 128, 00:33:33.001 "io_size": 4096, 00:33:33.001 "runtime": 15.014952, 00:33:33.001 "iops": 6027.192094919784, 00:33:33.001 "mibps": 23.543719120780406, 00:33:33.001 "io_failed": 13812, 00:33:33.001 "io_timeout": 0, 00:33:33.001 "avg_latency_us": 18390.902069032123, 00:33:33.001 "min_latency_us": 1092.2666666666667, 00:33:33.001 "max_latency_us": 23301.68888888889 00:33:33.001 } 00:33:33.001 ], 00:33:33.001 "core_count": 1 00:33:33.001 } 00:33:33.001 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3595605 00:33:33.001 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3595605 ']' 00:33:33.001 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3595605 00:33:33.001 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:33:33.001 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:33.001 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3595605 00:33:33.001 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:33.001 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:33.001 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3595605' 00:33:33.001 killing process with pid 3595605 00:33:33.001 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3595605 00:33:33.001 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3595605 00:33:33.001 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:33.001 [2024-11-10 00:06:41.193787] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:33:33.001 [2024-11-10 00:06:41.193970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595605 ] 00:33:33.001 [2024-11-10 00:06:41.340492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.001 [2024-11-10 00:06:41.469055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.001 Running I/O for 15 seconds... 
00:33:33.001 6287.00 IOPS, 24.56 MiB/s [2024-11-09T23:06:59.202Z] [2024-11-10 00:06:44.183494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.001 [2024-11-10 00:06:44.183561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.001 [2024-11-10 00:06:44.183644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.001 [2024-11-10 00:06:44.183670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.001 [2024-11-10 00:06:44.183696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.001 [2024-11-10 00:06:44.183718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.001 [2024-11-10 00:06:44.183743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.183765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.183789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.183810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.183833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.183854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.183879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.183915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.183947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.183968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.183991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:33.002 [2024-11-10 00:06:44.184078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184535] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.002 [2024-11-10 00:06:44.184732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.184963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.184984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.185007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.185028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.185050] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.185086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.185110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.185131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.185155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.185176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.185199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.185221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.185244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.185266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.185289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.185310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.185337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.185359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.002 [2024-11-10 00:06:44.185382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.002 [2024-11-10 00:06:44.185403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.185426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.185447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.185470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.185492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.185515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58448 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.185536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.185558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.185593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.185635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.185658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.185683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.185704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.185728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.185751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.185775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.185798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.185823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.185845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.185879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.185902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.185951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.185972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 
[2024-11-10 00:06:44.186070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.186968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.186991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.187012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.187036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.187057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.187081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.003 [2024-11-10 00:06:44.187103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.003 [2024-11-10 00:06:44.187126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.187888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.004 [2024-11-10 00:06:44.187949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.187972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.004 [2024-11-10 00:06:44.187994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.004 [2024-11-10 00:06:44.188039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 
[2024-11-10 00:06:44.188062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.004 [2024-11-10 00:06:44.188083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.004 [2024-11-10 00:06:44.188128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.004 [2024-11-10 00:06:44.188173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.004 [2024-11-10 00:06:44.188218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.004 [2024-11-10 00:06:44.188263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.188312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.188358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.188402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.188451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.188496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.188540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.188583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.188657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.188702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.188747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.004 [2024-11-10 00:06:44.188793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.004 [2024-11-10 00:06:44.188816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.188839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.188863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.188885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.188924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.188945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.188968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.188989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189012] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:44.189699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.005 [2024-11-10 00:06:44.189776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58120 len:8 PRP1 0x0 PRP2 0x0 00:33:33.005 [2024-11-10 00:06:44.189799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.189828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.005 [2024-11-10 00:06:44.189848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.005 [2024-11-10 00:06:44.189869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58128 len:8 PRP1 0x0 PRP2 0x0 00:33:33.005 [2024-11-10 00:06:44.189906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.190210] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:33.005 [2024-11-10 00:06:44.190287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.005 [2024-11-10 00:06:44.190316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.190341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.005 [2024-11-10 00:06:44.190363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:33.005 [2024-11-10 00:06:44.190385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.005 [2024-11-10 00:06:44.190406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.190428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.005 [2024-11-10 00:06:44.190449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:44.190470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:33:33.005 [2024-11-10 00:06:44.190546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:33.005 [2024-11-10 00:06:44.194314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:33.005 [2024-11-10 00:06:44.311481] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:33:33.005 5830.50 IOPS, 22.78 MiB/s [2024-11-09T23:06:59.206Z] 5973.33 IOPS, 23.33 MiB/s [2024-11-09T23:06:59.206Z] 6044.75 IOPS, 23.61 MiB/s [2024-11-09T23:06:59.206Z] [2024-11-10 00:06:47.957474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:47.957543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:47.957636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:47.957688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:47.957716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:47.957739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:47.957765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:47.957787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:47.957810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:47.957832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.005 [2024-11-10 00:06:47.957854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.005 [2024-11-10 00:06:47.957876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.957900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.957922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.957945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.957982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 
00:06:47.958383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.958973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.958996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.959017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.959040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.959061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.959083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.959103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.959125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.959146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.959168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.959189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.959211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.006 [2024-11-10 00:06:47.959232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.006 [2024-11-10 00:06:47.959255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.007 [2024-11-10 00:06:47.959275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.007 [2024-11-10 00:06:47.959318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.007 [2024-11-10 00:06:47.959360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.007 [2024-11-10 00:06:47.959404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.007 [2024-11-10 00:06:47.959451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.007 [2024-11-10 00:06:47.959495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.007 [2024-11-10 00:06:47.959538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.007 [2024-11-10 00:06:47.959581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.007 [2024-11-10 00:06:47.959651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.007 [2024-11-10 00:06:47.959694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.007 [2024-11-10 00:06:47.959739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:33.007 [2024-11-10 00:06:47.959783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.959829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.959873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.959931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.959975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.959998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960236] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.007 [2024-11-10 00:06:47.960800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.007 [2024-11-10 00:06:47.960824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.960846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.960869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.960890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.960929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.960949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.960972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.960994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:33.008 [2024-11-10 00:06:47.961191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.961957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.961981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.962002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.962025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.962046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.962070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.962092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.962116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.962137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.962160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 
nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.962182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.962206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.962227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.962250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.962271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.962293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.962314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.962338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.962358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.962381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.962402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.962429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.008 [2024-11-10 00:06:47.962451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.008 [2024-11-10 00:06:47.962473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.009 [2024-11-10 00:06:47.962494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.962517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.009 [2024-11-10 00:06:47.962539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.962562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.009 [2024-11-10 00:06:47.962583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.962632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:33.009 [2024-11-10 00:06:47.962654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.962678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.009 [2024-11-10 00:06:47.962700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.962723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.009 [2024-11-10 00:06:47.962745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.962769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.009 [2024-11-10 00:06:47.962791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.962815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.009 [2024-11-10 00:06:47.962838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.962890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.962932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9008 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.962953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.962980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.962999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9016 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.963037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.963073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.963113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.963149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:9032 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.963185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.963220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9040 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.963256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.963291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9048 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.963327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.963363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.963399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.963434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9064 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.963470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.963505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9072 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.963554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.963613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9080 len:8 PRP1 0x0 PRP2 0x0 
00:33:33.009 [2024-11-10 00:06:47.963654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.963696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.963732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.963770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9096 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.963806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.963842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9104 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.963878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.963930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.963948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9112 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.963966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.963985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.964001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.964018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8472 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.964037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.009 [2024-11-10 00:06:47.964055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.009 [2024-11-10 00:06:47.964071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.009 [2024-11-10 00:06:47.964088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8480 len:8 PRP1 0x0 PRP2 0x0 00:33:33.009 [2024-11-10 00:06:47.964107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.009 [2024-11-10 00:06:47.964376] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:33:33.010 [2024-11-10 00:06:47.964448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:33.010 [2024-11-10 00:06:47.964476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:47.964500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:33.010 [2024-11-10 00:06:47.964521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:47.964542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:33.010 [2024-11-10 00:06:47.964567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:47.964598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:33.010 [2024-11-10 00:06:47.964622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:47.964643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:33:33.010 [2024-11-10 00:06:47.964721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor
00:33:33.010 [2024-11-10 00:06:47.968500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:33:33.010 5933.80 IOPS, 23.18 MiB/s [2024-11-09T23:06:59.211Z] [2024-11-10 00:06:48.171513] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:33:33.010 5833.33 IOPS, 22.79 MiB/s [2024-11-09T23:06:59.211Z] 5883.71 IOPS, 22.98 MiB/s [2024-11-09T23:06:59.211Z] 5921.88 IOPS, 23.13 MiB/s [2024-11-09T23:06:59.211Z] 5964.89 IOPS, 23.30 MiB/s [2024-11-09T23:06:59.211Z] [2024-11-10 00:06:52.552136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.010 [2024-11-10 00:06:52.552193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:52.552234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.010 [2024-11-10 00:06:52.552257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:52.552283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.010 [2024-11-10 00:06:52.552306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:52.552331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.010 [2024-11-10 00:06:52.552353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:52.552377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.010 [2024-11-10 00:06:52.552398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:52.552423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.010 [2024-11-10 00:06:52.552444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:52.552467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.010 [2024-11-10 00:06:52.552488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:52.552510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.010 [2024-11-10 00:06:52.552531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:52.552554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.010 [2024-11-10 00:06:52.552576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:33.010 [2024-11-10 00:06:52.552636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:33.010 [2024-11-10 00:06:52.552660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.552684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.010 [2024-11-10 00:06:52.552705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.552729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.010 [2024-11-10 00:06:52.552751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.552774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.010 [2024-11-10 00:06:52.552795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.552819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.010 [2024-11-10 00:06:52.552840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.552863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.010 [2024-11-10 00:06:52.552884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.552924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.010 [2024-11-10 00:06:52.552945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.552968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.010 [2024-11-10 00:06:52.552989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.553012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.010 [2024-11-10 00:06:52.553032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.553055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.010 [2024-11-10 00:06:52.553077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.553100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.010 [2024-11-10 00:06:52.553121] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.553144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.010 [2024-11-10 00:06:52.553164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.553188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.010 [2024-11-10 00:06:52.553213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.553236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.010 [2024-11-10 00:06:52.553257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.553280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.010 [2024-11-10 00:06:52.553302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.553325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.010 [2024-11-10 00:06:52.553346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.553369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.010 [2024-11-10 00:06:52.553390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.553413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.010 [2024-11-10 00:06:52.553433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.010 [2024-11-10 00:06:52.553457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.010 [2024-11-10 00:06:52.553478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.553501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.553523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.553545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.553567] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.553612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.553637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.553662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.011 [2024-11-10 00:06:52.553684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.553710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.011 [2024-11-10 00:06:52.553733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.553758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.553779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.553807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.553830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.553853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.553874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.553912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.553934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.553956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.553977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.553998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 
00:06:52.554541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.011 [2024-11-10 00:06:52.554968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.011 [2024-11-10 00:06:52.554992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:62 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.555960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.555981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19512 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 
00:06:52.556524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.012 [2024-11-10 00:06:52.556642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.012 [2024-11-10 00:06:52.556666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.013 [2024-11-10 00:06:52.556688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.556712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.013 [2024-11-10 00:06:52.556734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.556758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.013 [2024-11-10 00:06:52.556780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.556803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.556826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.556852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.556879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.556920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.556942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.556965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.556987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:33.013 [2024-11-10 00:06:52.557541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.557976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.557998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.558022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.558044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.558068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.558093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.558117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.558138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.558162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.558183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.558207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.558229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.558252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.013 [2024-11-10 00:06:52.558273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.013 [2024-11-10 00:06:52.558319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:33.013 [2024-11-10 00:06:52.558342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:33.013 [2024-11-10 00:06:52.558362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18936 len:8 PRP1 0x0 PRP2 0x0 00:33:33.014 [2024-11-10 00:06:52.558383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.014 [2024-11-10 00:06:52.558682] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:33.014 [2024-11-10 00:06:52.558742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.014 [2024-11-10 00:06:52.558769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.014 [2024-11-10 00:06:52.558793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.014 [2024-11-10 00:06:52.558813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.014 [2024-11-10 00:06:52.558834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.014 [2024-11-10 00:06:52.558854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.014 [2024-11-10 00:06:52.558876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.014 [2024-11-10 00:06:52.558896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.014 [2024-11-10 00:06:52.558917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:33.014 [2024-11-10 00:06:52.558999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:33.014 [2024-11-10 00:06:52.562741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:33.014 [2024-11-10 00:06:52.637853] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:33:33.014 5934.40 IOPS, 23.18 MiB/s [2024-11-09T23:06:59.215Z] 5964.36 IOPS, 23.30 MiB/s [2024-11-09T23:06:59.215Z] 5988.00 IOPS, 23.39 MiB/s [2024-11-09T23:06:59.215Z] 6001.92 IOPS, 23.45 MiB/s [2024-11-09T23:06:59.215Z] 6014.86 IOPS, 23.50 MiB/s 00:33:33.014 Latency(us) 00:33:33.014 [2024-11-09T23:06:59.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.014 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:33.014 Verification LBA range: start 0x0 length 0x4000 00:33:33.014 NVMe0n1 : 15.01 6027.19 23.54 919.88 0.00 18390.90 1092.27 23301.69 00:33:33.014 [2024-11-09T23:06:59.215Z] =================================================================================================================== 00:33:33.014 [2024-11-09T23:06:59.215Z] Total : 6027.19 23.54 919.88 0.00 18390.90 1092.27 23301.69 00:33:33.014 Received shutdown signal, test time was about 15.000000 seconds 00:33:33.014 00:33:33.014 Latency(us) 00:33:33.014 [2024-11-09T23:06:59.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.014 [2024-11-09T23:06:59.215Z] =================================================================================================================== 00:33:33.014 [2024-11-09T23:06:59.215Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:33.014 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:33.014 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:33.014 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:33.014 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3597688 00:33:33.014 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:33.014 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3597688 /var/tmp/bdevperf.sock 00:33:33.014 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 
3597688 ']' 00:33:33.014 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:33.014 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:33.014 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:33.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:33.014 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:33.014 00:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:33.947 00:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:33.947 00:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:33:33.947 00:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:34.204 [2024-11-10 00:07:00.250059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:34.204 00:07:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:34.462 [2024-11-10 00:07:00.522996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:34.462 00:07:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:35.028 NVMe0n1 00:33:35.028 00:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:35.593 00:33:35.593 00:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:35.851 00:33:35.851 00:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:35.851 00:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:36.109 00:07:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:36.367 00:07:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:39.656 00:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:39.656 00:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # 
grep -q NVMe0 00:33:39.656 00:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3598511 00:33:39.656 00:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:39.656 00:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3598511 00:33:41.028 { 00:33:41.028 "results": [ 00:33:41.028 { 00:33:41.028 "job": "NVMe0n1", 00:33:41.028 "core_mask": "0x1", 00:33:41.028 "workload": "verify", 00:33:41.028 "status": "finished", 00:33:41.028 "verify_range": { 00:33:41.028 "start": 0, 00:33:41.028 "length": 16384 00:33:41.028 }, 00:33:41.028 "queue_depth": 128, 00:33:41.028 "io_size": 4096, 00:33:41.028 "runtime": 1.011132, 00:33:41.028 "iops": 6097.12678463346, 00:33:41.028 "mibps": 23.816901502474455, 00:33:41.028 "io_failed": 0, 00:33:41.028 "io_timeout": 0, 00:33:41.028 "avg_latency_us": 20880.795033372386, 00:33:41.028 "min_latency_us": 1650.5362962962963, 00:33:41.028 "max_latency_us": 20388.977777777778 00:33:41.028 } 00:33:41.028 ], 00:33:41.028 "core_count": 1 00:33:41.028 } 00:33:41.028 00:07:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:41.028 [2024-11-10 00:06:59.039444] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:33:41.028 [2024-11-10 00:06:59.039627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597688 ] 00:33:41.028 [2024-11-10 00:06:59.172286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.028 [2024-11-10 00:06:59.297887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.028 [2024-11-10 00:07:02.418670] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:41.028 [2024-11-10 00:07:02.418810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.028 [2024-11-10 00:07:02.418850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.028 [2024-11-10 00:07:02.418882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.028 [2024-11-10 00:07:02.418904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.028 [2024-11-10 00:07:02.418925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.028 [2024-11-10 00:07:02.418948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.028 [2024-11-10 00:07:02.418969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.028 [2024-11-10 00:07:02.418992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.028 [2024-11-10 
00:07:02.419014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:33:41.028 [2024-11-10 00:07:02.419097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:41.028 [2024-11-10 00:07:02.419156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:41.029 [2024-11-10 00:07:02.429039] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:41.029 Running I/O for 1 seconds... 00:33:41.029 6037.00 IOPS, 23.58 MiB/s 00:33:41.029 Latency(us) 00:33:41.029 [2024-11-09T23:07:07.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.029 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:41.029 Verification LBA range: start 0x0 length 0x4000 00:33:41.029 NVMe0n1 : 1.01 6097.13 23.82 0.00 0.00 20880.80 1650.54 20388.98 00:33:41.029 [2024-11-09T23:07:07.230Z] =================================================================================================================== 00:33:41.029 [2024-11-09T23:07:07.230Z] Total : 6097.13 23.82 0.00 0.00 20880.80 1650.54 20388.98 00:33:41.029 00:07:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:41.029 00:07:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:41.029 00:07:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:41.286 00:07:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:41.287 00:07:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:41.545 00:07:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:42.109 00:07:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3597688 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3597688 ']' 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3597688 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3597688 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3597688' 00:33:45.393 killing process with pid 3597688 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3597688 00:33:45.393 00:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3597688 00:33:45.958 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:45.958 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:46.523 rmmod nvme_tcp 00:33:46.523 rmmod nvme_fabrics 00:33:46.523 rmmod nvme_keyring 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3595190 ']' 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3595190 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3595190 ']' 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3595190 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3595190 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3595190' 00:33:46.523 killing process with pid 3595190 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3595190 00:33:46.523 00:07:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 
-- # wait 3595190 00:33:47.896 00:07:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:47.896 00:07:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:47.896 00:07:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:47.896 00:07:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:47.896 00:07:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:47.896 00:07:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:47.896 00:07:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:47.896 00:07:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:47.896 00:07:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:47.896 00:07:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.896 00:07:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.896 00:07:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.800 00:07:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:49.800 00:33:49.800 real 0m40.234s 00:33:49.800 user 2m21.244s 00:33:49.800 sys 0m6.319s 00:33:49.800 00:07:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:49.800 00:07:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:49.800 ************************************ 00:33:49.800 END TEST nvmf_failover 00:33:49.800 ************************************ 00:33:49.800 00:07:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:49.800 00:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:49.800 00:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:49.800 00:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.800 ************************************ 00:33:49.800 START TEST nvmf_host_discovery 00:33:49.800 ************************************ 00:33:49.800 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:49.800 * Looking for test storage... 
00:33:49.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:49.800 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:49.800 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:33:49.800 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.059 --rc genhtml_branch_coverage=1 00:33:50.059 --rc genhtml_function_coverage=1 00:33:50.059 --rc genhtml_legend=1 00:33:50.059 --rc geninfo_all_blocks=1 00:33:50.059 --rc geninfo_unexecuted_blocks=1 00:33:50.059 00:33:50.059 ' 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.059 --rc genhtml_branch_coverage=1 00:33:50.059 --rc genhtml_function_coverage=1 00:33:50.059 --rc genhtml_legend=1 00:33:50.059 --rc geninfo_all_blocks=1 00:33:50.059 --rc geninfo_unexecuted_blocks=1 00:33:50.059 00:33:50.059 ' 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.059 --rc genhtml_branch_coverage=1 00:33:50.059 --rc genhtml_function_coverage=1 00:33:50.059 --rc genhtml_legend=1 00:33:50.059 --rc geninfo_all_blocks=1 00:33:50.059 --rc geninfo_unexecuted_blocks=1 00:33:50.059 00:33:50.059 ' 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.059 --rc genhtml_branch_coverage=1 00:33:50.059 --rc genhtml_function_coverage=1 00:33:50.059 --rc genhtml_legend=1 00:33:50.059 --rc geninfo_all_blocks=1 00:33:50.059 --rc geninfo_unexecuted_blocks=1 00:33:50.059 00:33:50.059 ' 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:50.059 00:07:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.059 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:50.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:50.060 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:51.966 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.966 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:51.967 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.967 00:07:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:51.967 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:51.967 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:51.967 
00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:51.967 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:52.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:33:52.225 00:33:52.225 --- 10.0.0.2 ping statistics --- 00:33:52.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.225 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:52.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:33:52.225 00:33:52.225 --- 10.0.0.1 ping statistics --- 00:33:52.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.225 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3601329 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3601329 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3601329 ']' 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:52.225 00:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.225 [2024-11-10 00:07:18.316616] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:33:52.226 [2024-11-10 00:07:18.316778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.484 [2024-11-10 00:07:18.480600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.484 [2024-11-10 00:07:18.619190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.484 [2024-11-10 00:07:18.619284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.484 [2024-11-10 00:07:18.619316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.484 [2024-11-10 00:07:18.619342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.484 [2024-11-10 00:07:18.619363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:52.484 [2024-11-10 00:07:18.621033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.107 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:53.107 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:33:53.107 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:53.107 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:53.107 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.107 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.107 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:53.107 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.107 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.107 [2024-11-10 00:07:19.287383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.366 [2024-11-10 00:07:19.295525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.366 null0 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.366 null1 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3601456 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3601456 /tmp/host.sock 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3601456 ']' 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:53.366 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:53.366 00:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.366 [2024-11-10 00:07:19.420299] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:33:53.366 [2024-11-10 00:07:19.420455] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601456 ] 00:33:53.366 [2024-11-10 00:07:19.554407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.630 [2024-11-10 00:07:19.676886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.199 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:54.199 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:33:54.199 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:54.199 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:54.199 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.199 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.199 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.199 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:54.199 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.199 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.199 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.199 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:54.457 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.716 [2024-11-10 00:07:20.671486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:54.716 00:07:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:33:54.716 00:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:33:55.660 [2024-11-10 00:07:21.489770] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:55.660 [2024-11-10 00:07:21.489827] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:55.660 [2024-11-10 00:07:21.489886] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:55.660 [2024-11-10 00:07:21.576175] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:55.660 [2024-11-10 00:07:21.676570] 
bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:55.660 [2024-11-10 00:07:21.678273] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2a00:1 started. 00:33:55.660 [2024-11-10 00:07:21.680523] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:55.660 [2024-11-10 00:07:21.680556] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:55.660 [2024-11-10 00:07:21.687226] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2a00 was disconnected and freed. delete nvme_qpair. 00:33:55.660 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:55.660 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:55.660 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:33:55.660 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:55.660 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:55.660 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.660 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:55.660 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.660 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.920 00:07:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:55.920 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.921 00:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.921 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:55.921 [2024-11-10 00:07:22.112847] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 00:33:55.921 [2024-11-10 00:07:22.118623] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.180 [2024-11-10 00:07:22.185086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:56.180 [2024-11-10 00:07:22.186059] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:56.180 [2024-11-10 00:07:22.186112] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:56.180 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.181 [2024-11-10 00:07:22.272724] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:56.181 00:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:33:56.440 [2024-11-10 00:07:22.534708] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:56.440 [2024-11-10 00:07:22.534836] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:56.440 [2024-11-10 00:07:22.534865] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:56.440 [2024-11-10 00:07:22.534883] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.391 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.391 [2024-11-10 00:07:23.417645] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:57.391 [2024-11-10 00:07:23.417719] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:57.391 [2024-11-10 00:07:23.418975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.392 [2024-11-10 00:07:23.419022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.392 [2024-11-10 00:07:23.419049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.392 [2024-11-10 00:07:23.419071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.392 [2024-11-10 00:07:23.419102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.392 [2024-11-10 00:07:23.419141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.392 [2024-11-10 00:07:23.419173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:57.392 [2024-11-10 00:07:23.419196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.392 [2024-11-10 00:07:23.419219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
local max=10 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:57.392 [2024-11-10 00:07:23.428952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.392 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.392 [2024-11-10 00:07:23.439001] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.392 [2024-11-10 00:07:23.439049] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.392 [2024-11-10 00:07:23.439071] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.392 [2024-11-10 00:07:23.439088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.392 [2024-11-10 00:07:23.439161] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.392 [2024-11-10 00:07:23.439387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.392 [2024-11-10 00:07:23.439433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.392 [2024-11-10 00:07:23.439462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.392 [2024-11-10 00:07:23.439502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.392 [2024-11-10 00:07:23.439541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.392 [2024-11-10 00:07:23.439567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.392 [2024-11-10 00:07:23.439636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.392 [2024-11-10 00:07:23.439660] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.392 [2024-11-10 00:07:23.439677] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
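The repeated local cond=... / local max=10 / (( max-- )) / eval / sleep 1 lines in the shell trace come from the waitforcondition retry helper in common/autotest_common.sh (lines 916-922 in this build). A minimal sketch of what the trace implies; the loop structure and the value returned once the retries run out are reconstructions, since only the success path (return 0) is exercised in this run:

    # Hedged reconstruction of the polling helper seen throughout the xtrace.
    waitforcondition() {
        local cond=$1    # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10     # retry budget; one evaluation per second
        while ((max--)); do
            if eval "$cond"; then
                return 0 # condition met, as logged via autotest_common.sh@920
            fi
            sleep 1      # autotest_common.sh@922, then re-evaluate
        done
        return 1         # assumed failure path; never hit in this log
    }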
00:33:57.392 [2024-11-10 00:07:23.439691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.392 [2024-11-10 00:07:23.449203] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.392 [2024-11-10 00:07:23.449241] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.392 [2024-11-10 00:07:23.449259] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.392 [2024-11-10 00:07:23.449273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.392 [2024-11-10 00:07:23.449313] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.392 [2024-11-10 00:07:23.449472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.392 [2024-11-10 00:07:23.449513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.392 [2024-11-10 00:07:23.449540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.392 [2024-11-10 00:07:23.449597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.392 [2024-11-10 00:07:23.449650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.392 [2024-11-10 00:07:23.449672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.392 [2024-11-10 00:07:23.449708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.392 [2024-11-10 00:07:23.449726] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.392 [2024-11-10 00:07:23.449745] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.392 [2024-11-10 00:07:23.449758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.392 [2024-11-10 00:07:23.459356] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.392 [2024-11-10 00:07:23.459393] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.392 [2024-11-10 00:07:23.459411] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.392 [2024-11-10 00:07:23.459425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.392 [2024-11-10 00:07:23.459478] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:57.392 [2024-11-10 00:07:23.459658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.392 [2024-11-10 00:07:23.459694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.392 [2024-11-10 00:07:23.459719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.392 [2024-11-10 00:07:23.459752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.392 [2024-11-10 00:07:23.459783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.392 [2024-11-10 00:07:23.459805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.392 [2024-11-10 00:07:23.459825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.392 [2024-11-10 00:07:23.459843] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.393 [2024-11-10 00:07:23.459857] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.393 [2024-11-10 00:07:23.459899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:57.393 [2024-11-10 00:07:23.469518] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
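These repeating errno 111 (ECONNREFUSED) / "Bad file descriptor" / "Resetting controller failed." records are expected at this point: host/discovery.sh@127, earlier in the trace, removed the tcp/4420 listener from the target subsystem, so every reconnect attempt the host makes to 10.0.0.2:4420 is refused until the discovery poller prunes the stale path (the ":4420 not found" record a few chunks below). The removal call, with the same arguments as traced; note it carries no -s override, so it presumably goes to the nvmf target app's default RPC socket rather than the host app's /tmp/host.sock:

    # Drop the tcp/4420 listener from the subsystem (arguments as traced at
    # host/discovery.sh@127); rpc_cmd is the test suite's wrapper around
    # SPDK's JSON-RPC client.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420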
00:33:57.393 [2024-11-10 00:07:23.469567] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.393 [2024-11-10 00:07:23.469594] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.393 [2024-11-10 00:07:23.469611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.393 [2024-11-10 00:07:23.469680] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.393 [2024-11-10 00:07:23.469828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.393 [2024-11-10 00:07:23.469867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.393 [2024-11-10 00:07:23.469919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.393 [2024-11-10 00:07:23.469956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.393 [2024-11-10 00:07:23.469992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.393 [2024-11-10 00:07:23.470016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.393 [2024-11-10 00:07:23.470038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.393 [2024-11-10 00:07:23.470058] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.393 [2024-11-10 00:07:23.470073] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.393 [2024-11-10 00:07:23.470087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.393 [2024-11-10 00:07:23.479721] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.393 [2024-11-10 00:07:23.479756] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.393 [2024-11-10 00:07:23.479772] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.393 [2024-11-10 00:07:23.479785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.393 [2024-11-10 00:07:23.479823] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:57.393 [2024-11-10 00:07:23.480007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.393 [2024-11-10 00:07:23.480055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.393 [2024-11-10 00:07:23.480097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.393 [2024-11-10 00:07:23.480149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.393 [2024-11-10 00:07:23.480187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.393 [2024-11-10 00:07:23.480211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.393 [2024-11-10 00:07:23.480233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.393 [2024-11-10 00:07:23.480253] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.393 [2024-11-10 00:07:23.480269] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.393 [2024-11-10 00:07:23.480282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.393 [2024-11-10 00:07:23.489863] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.393 [2024-11-10 00:07:23.489904] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.393 [2024-11-10 00:07:23.489936] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.393 [2024-11-10 00:07:23.489948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.393 [2024-11-10 00:07:23.490023] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.393 [2024-11-10 00:07:23.490160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.393 [2024-11-10 00:07:23.490203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.393 [2024-11-10 00:07:23.490230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.393 [2024-11-10 00:07:23.490267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.393 [2024-11-10 00:07:23.490302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.393 [2024-11-10 00:07:23.490327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.393 [2024-11-10 00:07:23.490351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:57.393 [2024-11-10 00:07:23.490371] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.393 [2024-11-10 00:07:23.490388] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.393 [2024-11-10 00:07:23.490402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.393 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.393 [2024-11-10 00:07:23.500062] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.393 [2024-11-10 00:07:23.500100] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.393 [2024-11-10 00:07:23.500117] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.393 [2024-11-10 00:07:23.500132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.393 [2024-11-10 00:07:23.500173] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.393 [2024-11-10 00:07:23.500417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.393 [2024-11-10 00:07:23.500472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.393 [2024-11-10 00:07:23.500498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.394 [2024-11-10 00:07:23.500549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.394 [2024-11-10 00:07:23.500611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.394 [2024-11-10 00:07:23.500663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.394 [2024-11-10 00:07:23.500684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.394 [2024-11-10 00:07:23.500717] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.394 [2024-11-10 00:07:23.500735] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.394 [2024-11-10 00:07:23.500748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
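The conditions being polled in between these reconnect records are built from three small jq pipelines that the xtrace spells out at host/discovery.sh@55, @59 and @63. Collected into one hedged sketch; the function wrappers are reconstructions, the pipelines themselves are exactly as traced:

    HOST_SOCK=/tmp/host.sock   # socket of the host-side SPDK app, as traced

    get_subsystem_names() {    # discovery.sh@59: controller names, e.g. "nvme0"
        rpc_cmd -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {          # discovery.sh@55: expected "nvme0n1 nvme0n2" here
        rpc_cmd -s $HOST_SOCK bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {    # discovery.sh@63: TCP ports of every path to controller $1
        rpc_cmd -s $HOST_SOCK bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    # The @131 wait below only passes once get_subsystem_paths nvme0 shrinks
    # from "4420 4421" to "4421".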
00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:57.394 [2024-11-10 00:07:23.510216] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.394 [2024-11-10 00:07:23.510256] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.394 [2024-11-10 00:07:23.510275] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.394 [2024-11-10 00:07:23.510290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.394 [2024-11-10 00:07:23.510332] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.394 [2024-11-10 00:07:23.510533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.394 [2024-11-10 00:07:23.510576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.394 [2024-11-10 00:07:23.510614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.394 [2024-11-10 00:07:23.510680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.394 [2024-11-10 00:07:23.510728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.394 [2024-11-10 00:07:23.510753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:57.394 [2024-11-10 00:07:23.510773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.394 [2024-11-10 00:07:23.510795] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.394 [2024-11-10 00:07:23.510810] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.394 [2024-11-10 00:07:23.510823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.394 [2024-11-10 00:07:23.520374] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.394 [2024-11-10 00:07:23.520409] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.394 [2024-11-10 00:07:23.520424] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.394 [2024-11-10 00:07:23.520436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.394 [2024-11-10 00:07:23.520474] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:57.394 [2024-11-10 00:07:23.520674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.394 [2024-11-10 00:07:23.520722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.394 [2024-11-10 00:07:23.520746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.394 [2024-11-10 00:07:23.520788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.394 [2024-11-10 00:07:23.520837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.394 [2024-11-10 00:07:23.520863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.394 [2024-11-10 00:07:23.520884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.394 [2024-11-10 00:07:23.520902] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.394 [2024-11-10 00:07:23.520916] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.394 [2024-11-10 00:07:23.520928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.394 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.394 [2024-11-10 00:07:23.530515] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.394 [2024-11-10 00:07:23.530546] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.394 [2024-11-10 00:07:23.530577] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.394 [2024-11-10 00:07:23.530598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.394 [2024-11-10 00:07:23.530637] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.394 [2024-11-10 00:07:23.530774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.394 [2024-11-10 00:07:23.530813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.394 [2024-11-10 00:07:23.530838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.394 [2024-11-10 00:07:23.530872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.394 [2024-11-10 00:07:23.530919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.394 [2024-11-10 00:07:23.530946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.394 [2024-11-10 00:07:23.530967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:57.394 [2024-11-10 00:07:23.530986] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.394 [2024-11-10 00:07:23.531000] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.394 [2024-11-10 00:07:23.531019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.394 [2024-11-10 00:07:23.540678] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:57.394 [2024-11-10 00:07:23.540711] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:57.394 [2024-11-10 00:07:23.540726] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:57.394 [2024-11-10 00:07:23.540738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.395 [2024-11-10 00:07:23.540785] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.395 [2024-11-10 00:07:23.540952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.395 [2024-11-10 00:07:23.540989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:57.395 [2024-11-10 00:07:23.541014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:57.395 [2024-11-10 00:07:23.541046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:57.395 [2024-11-10 00:07:23.541094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.395 [2024-11-10 00:07:23.541120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.395 [2024-11-10 00:07:23.541140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.395 [2024-11-10 00:07:23.541159] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.395 [2024-11-10 00:07:23.541173] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.395 [2024-11-10 00:07:23.541185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
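Once the discovery poller finally drops the stale path (the ":4420 not found" record in the next chunk), the @131 wait passes and the test re-checks the notification count at host/discovery.sh@132, expecting 0, since dropping one path of the controller leaves both bdevs in place; later, after bdev_nvme_stop_discovery, the same check expects 2, matching the two nvme0n* bdevs going away. The counter traced at discovery.sh@74-75 appears to work like this; advancing notify_id by the count is inferred from the traced values (2, then 4):

    get_notification_count() {
        # Count notifications newer than the last seen id, then move the cursor.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }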
00:33:57.395 [2024-11-10 00:07:23.546181] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:57.395 [2024-11-10 00:07:23.546227] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:57.395 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:33:57.395 00:07:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.780 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.781 00:07:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.720 [2024-11-10 00:07:25.790033] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:59.720 [2024-11-10 00:07:25.790099] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:59.720 [2024-11-10 00:07:25.790153] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:59.720 [2024-11-10 00:07:25.876429] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:59.991 [2024-11-10 00:07:25.982576] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:59.991 [2024-11-10 00:07:25.984182] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x6150001f3e00:1 started. 00:33:59.991 [2024-11-10 00:07:25.987249] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:59.991 [2024-11-10 00:07:25.987319] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:59.992 [2024-11-10 00:07:25.990110] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x6150001f3e00 was disconnected and freed. delete nvme_qpair. 
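With discovery stopped (@134) and restarted (@141), the @143 step that has just begun above re-issues the identical bdev_nvme_start_discovery call under the NOT wrapper; the JSON-RPC error -17 "File exists" in the next chunk is the expected outcome, apparently because a discovery service is already attached to 10.0.0.2:8009 (the later nvme_second attempt against the same port fails the same way, while the 8010 attempt fails only with a timeout). The call being repeated, exactly as traced:

    # host/discovery.sh@141/@143: the first invocation starts discovery and -w
    # waits for the initial attach; repeating it against the same discovery
    # endpoint is expected to return JSON-RPC error -17 "File exists".
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w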
00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.992 request: 00:33:59.992 { 00:33:59.992 "name": "nvme", 00:33:59.992 "trtype": "tcp", 00:33:59.992 "traddr": "10.0.0.2", 00:33:59.992 "adrfam": "ipv4", 00:33:59.992 "trsvcid": "8009", 00:33:59.992 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:59.992 "wait_for_attach": true, 00:33:59.992 "method": "bdev_nvme_start_discovery", 00:33:59.992 "req_id": 1 00:33:59.992 } 00:33:59.992 Got JSON-RPC error response 00:33:59.992 response: 00:33:59.992 { 00:33:59.992 "code": -17, 00:33:59.992 "message": "File exists" 00:33:59.992 } 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:59.992 00:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.992 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.992 request: 00:33:59.992 { 00:33:59.992 "name": "nvme_second", 00:33:59.993 "trtype": "tcp", 00:33:59.993 "traddr": "10.0.0.2", 00:33:59.993 "adrfam": "ipv4", 00:33:59.993 "trsvcid": "8009", 00:33:59.993 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:59.993 "wait_for_attach": true, 00:33:59.993 "method": "bdev_nvme_start_discovery", 00:33:59.993 "req_id": 1 00:33:59.993 } 00:33:59.993 Got JSON-RPC error response 00:33:59.993 response: 00:33:59.993 { 00:33:59.993 "code": -17, 00:33:59.993 "message": "File exists" 00:33:59.993 } 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.993 00:07:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.375 [2024-11-10 00:07:27.182991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.375 [2024-11-10 00:07:27.183057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4080 with addr=10.0.0.2, port=8010 00:34:01.375 [2024-11-10 00:07:27.183137] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:01.375 [2024-11-10 00:07:27.183166] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:01.375 [2024-11-10 00:07:27.183189] 
bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:02.323 [2024-11-10 00:07:28.185349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.323 [2024-11-10 00:07:28.185418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=8010 00:34:02.323 [2024-11-10 00:07:28.185486] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:02.323 [2024-11-10 00:07:28.185509] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:02.323 [2024-11-10 00:07:28.185529] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:03.263 [2024-11-10 00:07:29.187512] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:03.263 request: 00:34:03.263 { 00:34:03.263 "name": "nvme_second", 00:34:03.263 "trtype": "tcp", 00:34:03.263 "traddr": "10.0.0.2", 00:34:03.263 "adrfam": "ipv4", 00:34:03.263 "trsvcid": "8010", 00:34:03.263 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:03.263 "wait_for_attach": false, 00:34:03.263 "attach_timeout_ms": 3000, 00:34:03.263 "method": "bdev_nvme_start_discovery", 00:34:03.263 "req_id": 1 00:34:03.263 } 00:34:03.263 Got JSON-RPC error response 00:34:03.263 response: 00:34:03.263 { 00:34:03.263 "code": -110, 00:34:03.263 "message": "Connection timed out" 00:34:03.263 } 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3601456 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:03.263 rmmod nvme_tcp 00:34:03.263 rmmod nvme_fabrics 00:34:03.263 rmmod nvme_keyring 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3601329 ']' 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3601329 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 3601329 ']' 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 3601329 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3601329 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3601329' 00:34:03.263 killing process with pid 3601329 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 3601329 00:34:03.263 00:07:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 3601329 00:34:04.643 00:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:04.643 00:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:04.643 00:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:04.643 00:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:04.643 00:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:04.643 00:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:04.643 00:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:04.643 00:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:04.643 00:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:04.643 00:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.643 00:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.643 
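Aside (not part of the captured output): the nvmf_host_discovery checks above exercise two failure paths of the bdev_nvme_start_discovery RPC against the host app's socket (/tmp/host.sock). Re-using the controller name nvme_second while that discovery service already exists returns JSON-RPC error -17 ("File exists"), and pointing discovery at port 8010, where nothing is listening, with a 3000 ms attach timeout returns -110 ("Connection timed out"). A minimal sketch of the same two negative checks, using the same socket, addresses and flags as the log (only the rpc shorthand variable and the trailing echo are added here):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Duplicate discovery name while waiting for attach -> expected error -17 ("File exists")
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
    || echo "expected failure (duplicate name)"
# No listener on port 8010, give up after 3000 ms -> expected error -110 ("Connection timed out")
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
    -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 \
    || echo "expected failure (connection timed out)"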
00:07:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:06.547 00:34:06.547 real 0m16.613s 00:34:06.547 user 0m25.052s 00:34:06.547 sys 0m3.223s 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.547 ************************************ 00:34:06.547 END TEST nvmf_host_discovery 00:34:06.547 ************************************ 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.547 ************************************ 00:34:06.547 START TEST nvmf_host_multipath_status 00:34:06.547 ************************************ 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:06.547 * Looking for test storage... 00:34:06.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:06.547 00:07:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:06.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.547 --rc genhtml_branch_coverage=1 00:34:06.547 --rc genhtml_function_coverage=1 00:34:06.547 --rc genhtml_legend=1 00:34:06.547 --rc geninfo_all_blocks=1 00:34:06.547 --rc geninfo_unexecuted_blocks=1 00:34:06.547 00:34:06.547 ' 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:06.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.547 --rc genhtml_branch_coverage=1 00:34:06.547 --rc genhtml_function_coverage=1 00:34:06.547 --rc genhtml_legend=1 00:34:06.547 --rc geninfo_all_blocks=1 00:34:06.547 --rc geninfo_unexecuted_blocks=1 00:34:06.547 00:34:06.547 ' 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:06.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.547 --rc genhtml_branch_coverage=1 00:34:06.547 --rc genhtml_function_coverage=1 00:34:06.547 --rc genhtml_legend=1 00:34:06.547 --rc geninfo_all_blocks=1 00:34:06.547 --rc geninfo_unexecuted_blocks=1 00:34:06.547 00:34:06.547 ' 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:06.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.547 --rc genhtml_branch_coverage=1 00:34:06.547 --rc genhtml_function_coverage=1 00:34:06.547 --rc 
genhtml_legend=1 00:34:06.547 --rc geninfo_all_blocks=1 00:34:06.547 --rc geninfo_unexecuted_blocks=1 00:34:06.547 00:34:06.547 ' 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.547 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:34:06.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:06.548 00:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:08.453 00:07:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:08.453 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:08.454 
00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:08.454 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:08.454 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:08.454 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:08.454 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:08.454 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:08.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:08.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:34:08.713 00:34:08.713 --- 10.0.0.2 ping statistics --- 00:34:08.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.713 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:08.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:08.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:34:08.713 00:34:08.713 --- 10.0.0.1 ping statistics --- 00:34:08.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.713 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3604838 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
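Aside (not part of the captured output): nvmftestinit above moves the first E810 port (cvl_0_0) into a fresh network namespace, cvl_0_0_ns_spdk, and addresses it as the target IP 10.0.0.2, keeps cvl_0_1 in the default namespace as the initiator IP 10.0.0.1, opens TCP port 4420 in iptables, and verifies reachability in both directions with ping before loading nvme-tcp. A condensed sketch of that bring-up, re-using the interface names and addresses from the trace:

# Interface names (cvl_0_0 / cvl_0_1) and addresses are taken from the trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator
modprobe nvme-tcp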
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3604838 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3604838 ']' 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:08.713 00:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:08.974 [2024-11-10 00:07:34.983414] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:34:08.974 [2024-11-10 00:07:34.983577] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.974 [2024-11-10 00:07:35.148823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:09.232 [2024-11-10 00:07:35.287841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:09.233 [2024-11-10 00:07:35.287934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:09.233 [2024-11-10 00:07:35.287960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:09.233 [2024-11-10 00:07:35.287984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:09.233 [2024-11-10 00:07:35.288004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:09.233 [2024-11-10 00:07:35.294629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:09.233 [2024-11-10 00:07:35.294641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.798 00:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:09.798 00:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:34:09.798 00:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:09.798 00:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:09.798 00:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:09.798 00:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.798 00:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3604838 00:34:09.798 00:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:10.058 [2024-11-10 00:07:36.183983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:10.058 00:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:10.624 Malloc0 00:34:10.624 00:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:10.883 00:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:11.141 00:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:11.398 [2024-11-10 00:07:37.377676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.398 00:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:11.658 [2024-11-10 00:07:37.646386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:11.658 00:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3605169 00:34:11.658 00:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:11.658 00:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:11.658 00:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3605169 
/var/tmp/bdevperf.sock 00:34:11.658 00:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3605169 ']' 00:34:11.658 00:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:11.658 00:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:11.658 00:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:11.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:11.658 00:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:11.658 00:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:12.596 00:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:12.596 00:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:34:12.596 00:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:12.853 00:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:13.420 Nvme0n1 00:34:13.420 00:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:13.989 Nvme0n1 00:34:13.989 00:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:13.989 00:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:16.527 00:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:16.527 00:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:16.527 00:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:16.528 00:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:17.902 00:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:17.902 00:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:17.902 00:07:43 
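Aside (not part of the captured output): at this point the target (nvmf_tgt, running inside cvl_0_0_ns_spdk) has been given a TCP transport, a 64 MiB / 512-byte-block Malloc0 bdev, and subsystem nqn.2016-06.io.spdk:cnode1 listening on both 10.0.0.2:4420 and 10.0.0.2:4421, while bdevperf on the host side attaches controller Nvme0 to that subsystem through both portals with -x multipath, producing a single Nvme0n1 bdev with two I/O paths. A condensed sketch of the same RPC sequence, using the sockets, NQN and addresses from the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target side (nvmf_tgt default RPC socket)
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Host side (bdevperf RPC socket): same controller name, two portals, multipath enabled
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10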
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.902 00:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:17.902 00:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.902 00:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:17.902 00:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.902 00:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:18.161 00:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:18.161 00:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:18.161 00:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.161 00:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:18.420 00:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.420 00:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:18.420 00:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.420 00:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:18.677 00:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.677 00:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:18.677 00:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.677 00:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:18.935 00:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.935 00:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:18.935 00:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.935 00:07:45 
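Aside (not part of the captured output): the check_status/port_status helpers above poll bdevperf's bdev_nvme_get_io_paths RPC and use jq to read the current, connected and accessible flags of the path whose trsvcid matches each listener, while set_ANA_state drives the expected values by changing the ANA state the target advertises for each listener. A sketch of one such probe, using the same socket, NQN and jq filter as the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Is the 4421 path still reported as accessible by the host?
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'

# Flip the 4421 listener to inaccessible on the target and re-check; as in the
# later steps of this test, the flag above is then expected to read false.
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421 -n inaccessible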
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:19.193 00:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.193 00:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:19.193 00:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:19.451 00:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:19.716 00:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:21.104 00:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:21.104 00:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:21.104 00:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.104 00:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:21.105 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:21.105 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:21.105 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.105 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:21.363 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.363 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:21.363 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.363 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:21.622 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.622 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:21.622 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.622 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:21.880 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.880 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:21.880 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.881 00:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:22.139 00:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.139 00:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:22.139 00:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.139 00:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:22.403 00:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.403 00:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:22.403 00:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:22.662 00:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:22.922 00:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:24.310 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:24.310 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:24.310 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.310 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:24.310 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.310 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:24.310 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.310 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:24.569 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:24.569 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:24.569 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.569 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:24.828 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.828 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:24.828 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.828 00:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:25.087 00:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.087 00:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:25.087 00:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.087 00:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:25.346 00:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.346 00:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:25.346 00:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.346 00:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:25.605 00:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.605 00:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:25.605 00:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:34:25.863 00:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:26.434 00:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:27.371 00:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:27.371 00:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:27.371 00:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.371 00:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:27.628 00:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.628 00:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:27.628 00:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.628 00:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:27.887 00:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:27.887 00:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:27.887 00:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.887 00:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:28.145 00:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.145 00:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:28.145 00:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.145 00:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:28.404 00:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.404 00:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:28.404 00:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:28.404 00:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:28.661 00:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.661 00:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:28.661 00:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.661 00:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:28.919 00:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:28.919 00:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:28.919 00:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:29.178 00:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:29.437 00:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:30.814 00:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:30.814 00:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:30.814 00:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.814 00:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:30.814 00:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:30.814 00:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:30.814 00:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.814 00:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:31.073 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:31.073 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:31.073 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.073 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:31.331 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.331 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:31.331 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.331 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:31.589 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.589 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:31.589 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.589 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:31.847 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:31.847 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:31.847 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.847 00:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:32.105 00:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:32.105 00:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:32.105 00:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:32.363 00:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:32.623 00:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:34.010 00:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:34.010 00:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:34.010 00:07:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.010 00:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:34.010 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:34.010 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:34.010 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.010 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:34.269 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.269 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:34.269 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.269 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:34.528 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.528 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:34.528 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.528 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:34.785 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.785 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:34.785 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.785 00:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:35.043 00:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:35.043 00:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:35.043 00:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.043 
00:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:35.303 00:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.303 00:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:35.561 00:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:35.561 00:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:35.818 00:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:36.091 00:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:37.108 00:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:37.108 00:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:37.108 00:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.108 00:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:37.366 00:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.366 00:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:37.366 00:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.366 00:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:37.933 00:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.933 00:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:37.933 00:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.933 00:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:37.933 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.933 00:08:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:37.933 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.933 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:38.191 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.191 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:38.191 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.191 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:38.449 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.449 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:38.449 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.449 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:39.016 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.016 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:39.016 00:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:39.016 00:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:39.275 00:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:40.649 00:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:40.649 00:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:40.649 00:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.649 00:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:40.649 00:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:40.649 00:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:40.649 00:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.649 00:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:40.908 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.908 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:40.908 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.908 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:41.166 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.166 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:41.166 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.166 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:41.424 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.424 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:41.424 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.424 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:41.682 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.682 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:41.682 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.682 00:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:42.247 00:08:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.247 00:08:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:42.247 
00:08:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:42.247 00:08:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:42.506 00:08:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:43.882 00:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:43.882 00:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:43.882 00:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.882 00:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:43.882 00:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.882 00:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:43.882 00:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.882 00:08:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:44.141 00:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.141 00:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:44.141 00:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.141 00:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:44.405 00:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.406 00:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:44.406 00:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.406 00:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:44.667 00:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.667 00:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:44.667 00:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.667 00:08:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:44.924 00:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.924 00:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:44.924 00:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.924 00:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:45.182 00:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.182 00:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:45.182 00:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:45.751 00:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:45.751 00:08:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:47.137 00:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:47.138 00:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:47.138 00:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.138 00:08:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:47.138 00:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.138 00:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:47.138 00:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.138 00:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:47.396 00:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:34:47.396 00:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:47.396 00:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.396 00:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:47.654 00:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.654 00:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:47.654 00:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.654 00:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:47.913 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.913 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:47.913 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.913 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:48.171 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.171 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:48.171 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.171 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:48.429 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:48.429 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3605169 00:34:48.429 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3605169 ']' 00:34:48.429 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3605169 00:34:48.429 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:34:48.429 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:48.429 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3605169 00:34:48.429 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # 
process_name=reactor_2 00:34:48.429 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:34:48.429 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3605169' 00:34:48.429 killing process with pid 3605169 00:34:48.429 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3605169 00:34:48.429 00:08:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3605169 00:34:48.429 { 00:34:48.429 "results": [ 00:34:48.429 { 00:34:48.429 "job": "Nvme0n1", 00:34:48.429 "core_mask": "0x4", 00:34:48.429 "workload": "verify", 00:34:48.429 "status": "terminated", 00:34:48.429 "verify_range": { 00:34:48.429 "start": 0, 00:34:48.429 "length": 16384 00:34:48.429 }, 00:34:48.429 "queue_depth": 128, 00:34:48.429 "io_size": 4096, 00:34:48.429 "runtime": 34.300932, 00:34:48.429 "iops": 5904.591746953115, 00:34:48.429 "mibps": 23.064811511535606, 00:34:48.429 "io_failed": 0, 00:34:48.429 "io_timeout": 0, 00:34:48.429 "avg_latency_us": 21638.006571161426, 00:34:48.429 "min_latency_us": 1735.4903703703703, 00:34:48.429 "max_latency_us": 4076242.1096296296 00:34:48.429 } 00:34:48.429 ], 00:34:48.429 "core_count": 1 00:34:48.429 } 00:34:49.390 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3605169 00:34:49.390 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:49.390 [2024-11-10 00:07:37.749523] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:34:49.390 [2024-11-10 00:07:37.749702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3605169 ] 00:34:49.390 [2024-11-10 00:07:37.887201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.390 [2024-11-10 00:07:38.009846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:49.390 Running I/O for 90 seconds... 
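The repeated rpc.py/jq invocations above all come from three small helpers in host/multipath_status.sh: set_ANA_state (sh@59-60) flips the ANA state of the two listeners, port_status (sh@64) reads one field of one I/O path from the bdevperf RPC socket, and check_status (sh@68-73) asserts all six fields at once. The following is a minimal sketch reconstructed only from the commands visible in this log; variable names such as rpc_py and bdevperf_rpc_sock are assumptions, and the real script may differ.

  # Hypothetical reconstruction of the helpers implied by the log above.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock

  set_ANA_state() {   # $1 = ANA state for port 4420, $2 = ANA state for port 4421
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  port_status() {     # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected value
      [[ $($rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2") == "$3" ]]
  }

  check_status() {    # six booleans: 4420/4421 current, connected, accessible
      port_status 4420 current "$1" && port_status 4421 current "$2" \
          && port_status 4420 connected "$3" && port_status 4421 connected "$4" \
          && port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

Each set_ANA_state call is followed by sleep 1 and a check_status with the expected pattern; for example, after "set_ANA_state non_optimized inaccessible" the expectation is "true false true true true false": only 4420 carries I/O, both paths stay connected, and only 4420 remains accessible. At sh@116 the test additionally switches the bdev to active_active multipath policy (bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active) and then repeats the same matrix of ANA-state checks.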
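The summary JSON that bdevperf emits above is internally consistent: mibps is just iops times io_size converted to MiB, and iops times runtime recovers the total number of completed I/Os. A quick sanity check with awk, using the numbers copied from that block:

  awk 'BEGIN {
      iops = 5904.591746953115; io_size = 4096; runtime = 34.300932
      printf "MiB/s = %.2f, total I/Os ~ %.0f\n", iops * io_size / (1024 * 1024), iops * runtime
  }'
  # prints roughly: MiB/s = 23.06, total I/Os ~ 202533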
00:34:49.390 6198.00 IOPS, 24.21 MiB/s [2024-11-09T23:08:15.591Z] 6259.50 IOPS, 24.45 MiB/s [2024-11-09T23:08:15.591Z] 6277.00 IOPS, 24.52 MiB/s [2024-11-09T23:08:15.591Z] 6286.25 IOPS, 24.56 MiB/s [2024-11-09T23:08:15.591Z] 6291.80 IOPS, 24.58 MiB/s [2024-11-09T23:08:15.591Z] 6269.50 IOPS, 24.49 MiB/s [2024-11-09T23:08:15.591Z] 6276.71 IOPS, 24.52 MiB/s [2024-11-09T23:08:15.591Z] 6280.12 IOPS, 24.53 MiB/s [2024-11-09T23:08:15.591Z] 6280.67 IOPS, 24.53 MiB/s [2024-11-09T23:08:15.591Z] 6272.40 IOPS, 24.50 MiB/s [2024-11-09T23:08:15.591Z] 6263.73 IOPS, 24.47 MiB/s [2024-11-09T23:08:15.591Z] 6258.33 IOPS, 24.45 MiB/s [2024-11-09T23:08:15.591Z] 6262.23 IOPS, 24.46 MiB/s [2024-11-09T23:08:15.591Z] 6259.79 IOPS, 24.45 MiB/s [2024-11-09T23:08:15.591Z] 6256.13 IOPS, 24.44 MiB/s [2024-11-09T23:08:15.591Z] [2024-11-10 00:07:55.300673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.390 [2024-11-10 00:07:55.300771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.390 [2024-11-10 00:07:55.300822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.390 [2024-11-10 00:07:55.300849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.390 [2024-11-10 00:07:55.300900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.390 [2024-11-10 00:07:55.300926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.390 [2024-11-10 00:07:55.300978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.390 [2024-11-10 00:07:55.301004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.390 [2024-11-10 00:07:55.301038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.390 [2024-11-10 00:07:55.301062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.390 [2024-11-10 00:07:55.301095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.390 [2024-11-10 00:07:55.301119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.390 [2024-11-10 00:07:55.301153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.390 [2024-11-10 00:07:55.301191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.390 [2024-11-10 00:07:55.301225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.390 [2024-11-10 00:07:55.301247] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.390 [2024-11-10 00:07:55.301547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.390 [2024-11-10 00:07:55.301604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.390 [2024-11-10 00:07:55.301661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.390 [2024-11-10 00:07:55.301689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.390 [2024-11-10 00:07:55.301726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.390 [2024-11-10 00:07:55.301752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.390 [2024-11-10 00:07:55.301787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.301813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.301848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.301874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.301909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.301950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.301986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 
nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.302929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.302980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.303007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.303057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.303082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.303134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.303160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.303553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.303610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.303666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.303701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.303740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.303767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.303804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.303830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.303876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.303918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.303955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.303981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.304016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.304041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.304093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.304118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.304169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.304194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.304229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.304255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.304290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.304315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.304351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.304376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.304411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.304438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.304490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.304515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.304554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.304580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
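The WRITE completions above (and the ones that follow) come back with ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. NVMe path-related status (status code type 3h, status code 02h): the listener serving that path was deliberately moved to the inaccessible ANA state by set_ANA_state while bdevperf kept I/O running. The run still finishes with "io_failed": 0 in the summary above, so the bdev multipath layer presumably resubmitted these I/Os on a path that remained accessible. To count how many completions came back this way in the captured trace, a simple grep over the try.txt file that the test cats at sh@141 (path taken from the log) is enough:

  # Count WRITE completions that were failed back with ANA-inaccessible status.
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt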
00:34:49.391 [2024-11-10 00:07:55.304644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.304670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.304706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.304732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.304768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.391 [2024-11-10 00:07:55.304793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.391 [2024-11-10 00:07:55.304828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.304854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.304889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.304929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.304964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.304989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.305023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.305047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.305080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.305105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.305140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.305164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.305198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.305223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.305257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.305281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.305320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.305344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.305379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.305404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.305438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.305463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.305497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.305521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.305555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.305579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.305642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.305682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.306236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.306268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.306311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.306338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.306374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.306400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.306436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.306462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.306497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.306538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.306574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.306626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.306664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.306696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.306732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.306758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.306794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.306820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.306856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.306882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.306932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.306959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.306992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.307017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:49.392 [2024-11-10 00:07:55.307074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.307130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.307188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.307245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.307303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.307360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.307421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.307479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.307536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.307620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 
nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.307699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.392 [2024-11-10 00:07:55.307760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.392 [2024-11-10 00:07:55.307821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.392 [2024-11-10 00:07:55.307856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.307882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.307933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.307974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.308880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:34:49.393 [2024-11-10 00:07:55.308959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.308983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.309967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.309991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.310026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.310050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.310084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.310108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.393 [2024-11-10 00:07:55.310141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.393 [2024-11-10 00:07:55.310165] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.310199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.394 [2024-11-10 00:07:55.310222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.310256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.394 [2024-11-10 00:07:55.310293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.310331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.310355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.310389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.310413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.310447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.310472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.310506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.310535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.310569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.310617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.310656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.310681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.310717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.310742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.311725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:49.394 [2024-11-10 00:07:55.311758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.311800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.311827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.311864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.311890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.311926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.311953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 
nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.312925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.312976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.313000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.313035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.313058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.313097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.313122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.313156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.313180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.313232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.313256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.313292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.313317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.313353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.313377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.313412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.313437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.313472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.313497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.394 [2024-11-10 00:07:55.313547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.394 [2024-11-10 00:07:55.313571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.313612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.313637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:34:49.395 [2024-11-10 00:07:55.313671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.313694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.313728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.313761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.313797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.313821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.313854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.313883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.313917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.313941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.313975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.313998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.314941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.314968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.315005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.315030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.315066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.315091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.315127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.315152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.315189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.315214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.315266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.315305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.315342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.315366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.316225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.316275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.316320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.316347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.316390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.316416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.316452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:49.395 [2024-11-10 00:07:55.316493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.316529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.316554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.316613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.395 [2024-11-10 00:07:55.316654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.395 [2024-11-10 00:07:55.316691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.396 [2024-11-10 00:07:55.316716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.316751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.396 [2024-11-10 00:07:55.316776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.316811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.396 [2024-11-10 00:07:55.316836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.316871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.316896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.316946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.316971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317744] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.396 [2024-11-10 00:07:55.317829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.317939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.317967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 
dnr:0 00:34:49.396 [2024-11-10 00:07:55.318377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.318947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.318972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.319007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.319031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.319065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.319091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.319147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.396 [2024-11-10 00:07:55.319172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.396 [2024-11-10 00:07:55.319207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.319231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.319266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.319290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.319324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.319348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.319382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.319407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.319443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.319466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.319501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.319528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.319565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.319615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.319654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.319680] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.319717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.319743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.319779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.319805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.319841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.319867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.319917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.319942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.319977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.320001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.320060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.320133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.320197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.320254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.320312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.320374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.397 [2024-11-10 00:07:55.320445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.320505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.320562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.320672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.320736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.320798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.320835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.320861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.321857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.321890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.321934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:119 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.321961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.321997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.322024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.322059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.322085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.322141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.322168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.322219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.322243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.322276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.322300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.322333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.322356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.322389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.322413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.322445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.322469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.322503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.322526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.322559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.322609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.322663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.397 [2024-11-10 00:07:55.322689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.397 [2024-11-10 00:07:55.322725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.322751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.322787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.322813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.322848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.322888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.322923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.322967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:34:49.398 [2024-11-10 00:07:55.323227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.323936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.323975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.324964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.324988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.325038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.325077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.325114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.325139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.325173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:49.398 [2024-11-10 00:07:55.325198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.325232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.398 [2024-11-10 00:07:55.325257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.398 [2024-11-10 00:07:55.325292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.325316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.325356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.325396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.325445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.325469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.325518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.325543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.326441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.326473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.326515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.326554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.326603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.326630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.326666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.326692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.326728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.326753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.326789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.326815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.326851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.326890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.326940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.326964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.326998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.327022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.327082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.327924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.327948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:34:49.399 [2024-11-10 00:07:55.327982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.328006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.328040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.399 [2024-11-10 00:07:55.328064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.328098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.328123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.328157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.328181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.328215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.328239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.328273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.328297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.399 [2024-11-10 00:07:55.328330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.399 [2024-11-10 00:07:55.328354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.328387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.328411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.328445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.328469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.328503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.328528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.328592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.328648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.328685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.328711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.328746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.328772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.328809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.328835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.328887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.328911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.328959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.328983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329208] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.329950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.329973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.330029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.330086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.330146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.330203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.330260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.330315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.330370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330404] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.330427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.330482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.400 [2024-11-10 00:07:55.330549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.400 [2024-11-10 00:07:55.330665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.400 [2024-11-10 00:07:55.330727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.400 [2024-11-10 00:07:55.330785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.400 [2024-11-10 00:07:55.330845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.400 [2024-11-10 00:07:55.330896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.330925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.331881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.331913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.331955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.331981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332017] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 
dnr:0 00:34:49.401 [2024-11-10 00:07:55.332700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.332927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.332977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.333957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.333980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.334013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.334036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.334068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.334092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.334124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.334147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.334196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.334225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.334278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.401 [2024-11-10 00:07:55.334304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.401 [2024-11-10 00:07:55.334339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.334364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.334400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.334425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.334460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.334485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.334537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:49.402 [2024-11-10 00:07:55.334561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.334621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.334663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.334699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.334724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.334758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.334782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.334817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.334842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.334892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.334916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.334967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.334991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.335026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.335055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.335089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.335114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.335148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.335173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.335207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.335232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.335280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.335303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.335336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.335359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.335391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.335414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.335447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.335470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.336309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.336341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.336383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.336410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.336446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.336513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.336553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.336580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.336636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.336662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.336703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.336730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.336765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.336791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.336828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.336853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.336906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.336945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.336980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.337003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.337037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.402 [2024-11-10 00:07:55.337060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.337094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.402 [2024-11-10 00:07:55.337118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.337152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.402 [2024-11-10 00:07:55.337175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.337209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.402 [2024-11-10 00:07:55.337233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.337266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.402 [2024-11-10 00:07:55.337291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:34:49.402 [2024-11-10 00:07:55.337324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.402 [2024-11-10 00:07:55.337349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.337381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.402 [2024-11-10 00:07:55.337405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.337443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.402 [2024-11-10 00:07:55.337468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.337501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.402 [2024-11-10 00:07:55.337526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.337559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.402 [2024-11-10 00:07:55.337606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.337660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.402 [2024-11-10 00:07:55.337686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.402 [2024-11-10 00:07:55.337721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.337746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.337783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.337808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.337844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.337869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.337921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.337959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.337993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.403 [2024-11-10 00:07:55.338072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.338953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.338986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:49.403 [2024-11-10 00:07:55.339180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:58 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.403 [2024-11-10 00:07:55.339829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.403 [2024-11-10 00:07:55.339854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.339905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.404 [2024-11-10 00:07:55.339944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.339978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.404 [2024-11-10 00:07:55.340001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.404 [2024-11-10 00:07:55.340068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.404 [2024-11-10 00:07:55.340123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.404 [2024-11-10 00:07:55.340178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.404 [2024-11-10 00:07:55.340241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.404 [2024-11-10 00:07:55.340306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.404 [2024-11-10 00:07:55.340373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.404 [2024-11-10 00:07:55.340437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.404 [2024-11-10 00:07:55.340493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.404 [2024-11-10 00:07:55.340579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.340675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.340745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.340804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.340839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.340879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.341750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.341781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.341835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.341863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.341900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.341925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:34:49.404 [2024-11-10 00:07:55.341961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.341986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.342940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.342973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.343013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.343047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.343070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.343102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.343126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.343158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.343180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.343213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.404 [2024-11-10 00:07:55.343235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.404 [2024-11-10 00:07:55.343267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.343291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.343323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.343346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.343378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.343401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.343434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.343457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.343489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.343512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.343544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.343567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.343623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.343649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.343683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.343713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.343749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:49.405 [2024-11-10 00:07:55.343773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.343806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.343830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.343864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.343888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.343937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.343975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.344926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.344950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.345001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.345026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.345075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.345099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.345131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.345154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.345186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.345210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.345243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.345266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.345300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.345323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.346256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.346288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.346330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.346357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.346393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.346418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.346454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.346493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.346548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.346573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.405 [2024-11-10 00:07:55.346636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.405 [2024-11-10 00:07:55.346677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:34:49.405 [2024-11-10 00:07:55.346711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.406 [2024-11-10 00:07:55.346735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.346768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.406 [2024-11-10 00:07:55.346792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.346824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.406 [2024-11-10 00:07:55.346848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.346882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.406 [2024-11-10 00:07:55.346905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.346939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.406 [2024-11-10 00:07:55.346978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.406 [2024-11-10 00:07:55.347034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.347929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.347963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.406 [2024-11-10 00:07:55.347986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:49.406 [2024-11-10 00:07:55.348519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.348951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.406 [2024-11-10 00:07:55.348975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.406 [2024-11-10 00:07:55.349009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:66 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349752] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.349960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.349983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.350017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.350040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.350073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.350096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.350129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.350156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.350189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.350213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.350245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.350269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.350301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.350324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 
sqhd:0023 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.350356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.350379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.350412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.350434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.350467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.407 [2024-11-10 00:07:55.350501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.350537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.350560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.350616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.350642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.350677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.350700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.351553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.351584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.351637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.351663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.351708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.351744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.351783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.351809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.351844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.351869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.351921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.351945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.351994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.352018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.352053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.352092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.352125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.352148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.352181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.352204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.352235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.352258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.407 [2024-11-10 00:07:55.352291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.407 [2024-11-10 00:07:55.352313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.352345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.352369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.352402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.352425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.352457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.352480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.352517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.352541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.352574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.352623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.352660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.352684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.352718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.352741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.352774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.352797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.352831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.352854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.352888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.352926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.352959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.352982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:49.408 [2024-11-10 00:07:55.353038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.353951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.353986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.354016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.354053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.354077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.354129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.354168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.354203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.354227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.354260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.354284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.354317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.354341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.354375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.408 [2024-11-10 00:07:55.354398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.408 [2024-11-10 00:07:55.354447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.354472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.354507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.354531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.354566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.354600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.354637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.354662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.354697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.354721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.354755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.354798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.354835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.354873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:34:49.409 [2024-11-10 00:07:55.354906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.354930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.354963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.354986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.355019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.355043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.355864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.355897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.355940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.355968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.356031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.356094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.356183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.356262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.356320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.356377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.356439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.356496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.356551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.356632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.356692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.356749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.356827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.356887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.356949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.356985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.357010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.357045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.357070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.357120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.357145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.357184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.357209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.357243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.357268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.357302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.357326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.357360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.357384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.357418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.357442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.357476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.357501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.357534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:49.409 [2024-11-10 00:07:55.357558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.357617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.409 [2024-11-10 00:07:55.357644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.357679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.409 [2024-11-10 00:07:55.357704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.409 [2024-11-10 00:07:55.357739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.357765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.357808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.357835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.357870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.357921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.357977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 
nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.358967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.358999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 
dnr:0 00:34:49.410 [2024-11-10 00:07:55.359447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.359953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.359976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.360009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.360032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.360065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.360090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.360122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.410 [2024-11-10 00:07:55.360145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.410 [2024-11-10 00:07:55.360178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.411 [2024-11-10 00:07:55.360213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.360252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.360278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.360311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.360335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.360733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.360766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.360844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.360875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.360928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.360963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:49.411 [2024-11-10 00:07:55.361810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.361930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.361971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.362955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.362992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.363016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.363053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.363078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.363115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.363142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.363180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.411 [2024-11-10 00:07:55.363204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.411 [2024-11-10 00:07:55.363243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.363267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.363304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.363327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.363365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.363389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.363427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.363450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.363487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.363511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.363548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.363573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.363636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.363662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.363701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.363726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:34:49.412 [2024-11-10 00:07:55.363765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.363790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.363828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.363853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.363908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.363933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.363976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.364001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.364039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.364063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.364101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.364141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.364181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.364206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.364245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.364270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.364308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.364333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.364371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.364396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.364434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.364460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.364499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.364524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:07:55.364741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:07:55.364772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.412 5881.12 IOPS, 22.97 MiB/s [2024-11-09T23:08:15.613Z] 5535.18 IOPS, 21.62 MiB/s [2024-11-09T23:08:15.613Z] 5227.67 IOPS, 20.42 MiB/s [2024-11-09T23:08:15.613Z] 4952.53 IOPS, 19.35 MiB/s [2024-11-09T23:08:15.613Z] 4984.55 IOPS, 19.47 MiB/s [2024-11-09T23:08:15.613Z] 5040.95 IOPS, 19.69 MiB/s [2024-11-09T23:08:15.613Z] 5116.64 IOPS, 19.99 MiB/s [2024-11-09T23:08:15.613Z] 5259.00 IOPS, 20.54 MiB/s [2024-11-09T23:08:15.613Z] 5395.50 IOPS, 21.08 MiB/s [2024-11-09T23:08:15.613Z] 5508.68 IOPS, 21.52 MiB/s [2024-11-09T23:08:15.613Z] 5536.42 IOPS, 21.63 MiB/s [2024-11-09T23:08:15.613Z] 5563.96 IOPS, 21.73 MiB/s [2024-11-09T23:08:15.613Z] 5590.46 IOPS, 21.84 MiB/s [2024-11-09T23:08:15.613Z] 5654.17 IOPS, 22.09 MiB/s [2024-11-09T23:08:15.613Z] 5743.73 IOPS, 22.44 MiB/s [2024-11-09T23:08:15.613Z] 5836.74 IOPS, 22.80 MiB/s [2024-11-09T23:08:15.613Z] [2024-11-10 00:08:11.907078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:08:11.907185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:08:11.907261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:08:11.907290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:08:11.907337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:08:11.907363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:08:11.907399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:08:11.907424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:08:11.907459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:08:11.907486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:08:11.907522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.412 [2024-11-10 00:08:11.907547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:08:11.907584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.412 [2024-11-10 00:08:11.907636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:08:11.907676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.412 [2024-11-10 00:08:11.907702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:08:11.907740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.412 [2024-11-10 00:08:11.907765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:08:11.907803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.412 [2024-11-10 00:08:11.907828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.412 [2024-11-10 00:08:11.907866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.412 [2024-11-10 00:08:11.907891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.907943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.907968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.908028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.908100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.908159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.908219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.908281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.908342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.908406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.908468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.908528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.908613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.908685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.908747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
00:34:49.413 [2024-11-10 00:08:11.908783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.908807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.908875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.908953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.908990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.909014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.909074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.909134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.909195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.909255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.909316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.909376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.909436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.909496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.909555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.413 [2024-11-10 00:08:11.909644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.909707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.909771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.909832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.909894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.909947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.909971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.413 [2024-11-10 00:08:11.910007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.413 [2024-11-10 00:08:11.910031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:34:49.413 [2024-11-10 00:08:11.913131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.413 [2024-11-10 00:08:11.913171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:34:49.413 [2024-11-10 00:08:11.913236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:49.413 [2024-11-10 00:08:11.913264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0
[... the same pair of NOTICE messages repeats for the remaining outstanding I/O on qid:1: each READ/WRITE command (cid, lba, sqhd varying) is printed and then completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 00:08:11.913 through 00:08:11.938 ...]
00:34:49.419 [2024-11-10 00:08:11.938715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1
lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.938741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.938777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.938803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.938839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.938865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.938901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.938953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.939050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.939131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.939194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.939255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.939315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.939391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.939456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.939519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.939583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.939666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.939729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.939792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.939829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.939855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.941618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.941680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.941726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.941753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.941791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.941818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:49.419 [2024-11-10 00:08:11.941855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.941892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.941928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.941954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.941991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.942017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.942080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.942143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.942223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.942286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.942363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.942422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.942486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.942547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.942639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.942703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.942766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.942828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.942905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.942948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.942972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.943008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.419 [2024-11-10 00:08:11.943032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:49.419 [2024-11-10 00:08:11.943067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.419 [2024-11-10 00:08:11.943092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.943127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.943151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.943186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.943212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.943248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.943272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.946078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.946180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.946251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.946325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.946401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.946465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.946550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:49.420 [2024-11-10 00:08:11.946640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.946703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.946767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.946830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.946893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.946950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.946976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.947033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.947092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.947151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.947209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.947284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.947364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.947427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.947490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.947553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.947627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.947691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.947761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.947826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.947921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.947974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.420 [2024-11-10 00:08:11.947998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.950192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.950228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.950272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.950298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.950335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.950361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.950397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.950422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.950459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.950485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.950520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.950561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.950621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.950649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:49.420 [2024-11-10 00:08:11.950688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.420 [2024-11-10 00:08:11.950724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.950764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.950799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:34:49.421 [2024-11-10 00:08:11.950837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.950863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.950916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.950942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.950993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.951017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.951077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.951136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.951194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.951253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.951312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.951371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.951430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.951488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.951549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.951645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.951718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.951784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.951855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.951916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.951963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.952003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.952027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.952062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.952085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.952120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.952145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.954405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.954440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.954509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.954536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.954596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.954628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.954681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.954708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.954762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.954791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.954830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.954856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.954893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.954918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.954954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.955001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.955040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.955065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.955115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:49.421 [2024-11-10 00:08:11.955140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.955175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.955199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.955234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.421 [2024-11-10 00:08:11.955259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.955294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.421 [2024-11-10 00:08:11.955318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:49.421 [2024-11-10 00:08:11.955353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.955377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.955412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.955436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.955470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.955495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.955530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.955559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.955625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.955669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.955708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.955735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.955772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.955797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.955834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.955860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.955913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.955955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.955991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.956016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.956074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.956102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.957919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.957953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.957997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.958024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.958089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.958152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.958235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.958313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.958372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.958431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.958490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.958579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.958669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.958734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.958797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.958861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.958913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.958940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:34:49.422 [2024-11-10 00:08:11.958992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.959016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.959075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.959139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.959199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.959256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.959315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.959375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.959433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.959493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.959552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.422 [2024-11-10 00:08:11.959645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.959708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.422 [2024-11-10 00:08:11.959768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:49.422 [2024-11-10 00:08:11.959803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.423 [2024-11-10 00:08:11.959828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.959869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.423 [2024-11-10 00:08:11.959894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.962918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.423 [2024-11-10 00:08:11.962970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.963065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.423 [2024-11-10 00:08:11.963110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.963164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.423 [2024-11-10 00:08:11.963190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.963227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.423 [2024-11-10 00:08:11.963252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.963289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.423 [2024-11-10 00:08:11.963315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.963352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.423 [2024-11-10 00:08:11.963377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.963413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.423 [2024-11-10 00:08:11.963452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.963490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.423 [2024-11-10 00:08:11.963514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.963565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.423 [2024-11-10 00:08:11.963598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.963654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.423 [2024-11-10 00:08:11.963679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.963715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.423 [2024-11-10 00:08:11.963740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:49.423 [2024-11-10 00:08:11.963782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:49.423 [2024-11-10 00:08:11.963808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:49.423 5882.00 IOPS, 22.98 MiB/s [2024-11-09T23:08:15.624Z] 5894.33 IOPS, 23.02 MiB/s [2024-11-09T23:08:15.624Z] 5902.71 IOPS, 23.06 MiB/s [2024-11-09T23:08:15.624Z] Received shutdown signal, test time was about 34.301724 seconds 00:34:49.423 00:34:49.423 Latency(us) 00:34:49.423 [2024-11-09T23:08:15.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.423 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:49.423 Verification LBA range: start 0x0 length 0x4000 00:34:49.423 Nvme0n1 : 34.30 5904.59 23.06 0.00 0.00 21638.01 1735.49 4076242.11 00:34:49.423 [2024-11-09T23:08:15.624Z] =================================================================================================================== 00:34:49.423 [2024-11-09T23:08:15.624Z] Total : 5904.59 23.06 0.00 0.00 21638.01 1735.49 4076242.11 00:34:49.423 00:08:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:49.683 rmmod nvme_tcp 00:34:49.683 rmmod nvme_fabrics 00:34:49.683 rmmod nvme_keyring 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3604838 ']' 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3604838 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3604838 ']' 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3604838 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3604838 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3604838' 00:34:49.683 killing process with pid 3604838 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3604838 00:34:49.683 00:08:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3604838 00:34:51.058 00:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:51.058 00:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:51.058 00:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:51.058 00:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:51.058 00:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:34:51.058 00:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:51.058 00:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:34:51.058 00:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:51.058 00:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:51.058 00:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.058 00:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.058 00:08:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.968 00:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:53.228 00:34:53.228 real 0m46.588s 00:34:53.228 user 2m20.022s 00:34:53.228 sys 0m10.432s 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:53.228 ************************************ 00:34:53.228 END TEST nvmf_host_multipath_status 00:34:53.228 ************************************ 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.228 ************************************ 00:34:53.228 START TEST nvmf_discovery_remove_ifc 00:34:53.228 ************************************ 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:53.228 * Looking for test storage... 
00:34:53.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.228 --rc genhtml_branch_coverage=1 00:34:53.228 --rc genhtml_function_coverage=1 00:34:53.228 --rc genhtml_legend=1 00:34:53.228 --rc geninfo_all_blocks=1 00:34:53.228 --rc geninfo_unexecuted_blocks=1 00:34:53.228 00:34:53.228 ' 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.228 --rc genhtml_branch_coverage=1 00:34:53.228 --rc genhtml_function_coverage=1 00:34:53.228 --rc genhtml_legend=1 00:34:53.228 --rc geninfo_all_blocks=1 00:34:53.228 --rc geninfo_unexecuted_blocks=1 00:34:53.228 00:34:53.228 ' 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.228 --rc genhtml_branch_coverage=1 00:34:53.228 --rc genhtml_function_coverage=1 00:34:53.228 --rc genhtml_legend=1 00:34:53.228 --rc geninfo_all_blocks=1 00:34:53.228 --rc geninfo_unexecuted_blocks=1 00:34:53.228 00:34:53.228 ' 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.228 --rc genhtml_branch_coverage=1 00:34:53.228 --rc genhtml_function_coverage=1 00:34:53.228 --rc genhtml_legend=1 00:34:53.228 --rc geninfo_all_blocks=1 00:34:53.228 --rc geninfo_unexecuted_blocks=1 00:34:53.228 00:34:53.228 ' 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.228 
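The xtrace above shows scripts/common.sh comparing the installed lcov version against 1.15 field by field before turning on branch/function coverage options. A minimal stand-alone sketch of that kind of dotted-version "less than" check is below; the helper name version_lt is illustrative (the repo's own helpers are lt/cmp_versions), and it assumes purely numeric version components.

# Illustrative dotted-version comparison, in the spirit of the lt/cmp_versions
# helpers traced above. Assumes numeric components only (e.g. "1.15", "2").
version_lt() {  # usage: version_lt 1.15 2  -> returns 0 if $1 < $2
        local IFS=.-
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
                local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
                (( a < b )) && return 0
                (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
}

# Example use, mirroring the gate seen in the trace (lcov assumed installed):
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi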
00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.228 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:53.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.229 00:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:55.759 00:08:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:55.759 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:55.759 00:08:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:55.759 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:55.759 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:55.759 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:55.759 
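The records above trace nvmf_tcp_init wiring the two detected e810 ports into a point-to-point test topology: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), and an iptables ACCEPT rule is opened for TCP port 4420. A condensed sketch of that same setup follows; the interface and namespace names are taken from this log, everything else is stock iproute2/iptables run as root.

# Sketch of the target/initiator split traced above (common.sh@250-287).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity checks, matching the pings below:
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The SPDK_NVMF comment tag is what the earlier teardown in this log keys on when it restores the firewall via iptables-save | grep -v SPDK_NVMF | iptables-restore.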
00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:55.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:55.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:34:55.759 00:34:55.759 --- 10.0.0.2 ping statistics --- 00:34:55.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.759 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:34:55.759 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:55.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:55.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:34:55.760 00:34:55.760 --- 10.0.0.1 ping statistics --- 00:34:55.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.760 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3611849 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3611849 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3611849 ']' 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:55.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:55.760 00:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.760 [2024-11-10 00:08:21.661694] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:34:55.760 [2024-11-10 00:08:21.661856] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.760 [2024-11-10 00:08:21.806367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.760 [2024-11-10 00:08:21.938688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.760 [2024-11-10 00:08:21.938780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:55.760 [2024-11-10 00:08:21.938805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.760 [2024-11-10 00:08:21.938831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.760 [2024-11-10 00:08:21.938851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:55.760 [2024-11-10 00:08:21.940482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.694 [2024-11-10 00:08:22.650802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.694 [2024-11-10 00:08:22.659063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:56.694 null0 00:34:56.694 [2024-11-10 00:08:22.690947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3612009 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3612009 /tmp/host.sock 00:34:56.694 00:08:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3612009 ']' 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:56.694 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:56.694 00:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.694 [2024-11-10 00:08:22.801646] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:34:56.694 [2024-11-10 00:08:22.801801] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3612009 ] 00:34:56.953 [2024-11-10 00:08:22.937382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.953 [2024-11-10 00:08:23.060992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.890 00:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:57.890 00:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:34:57.890 00:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:57.890 00:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:57.890 00:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.890 00:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.890 00:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.890 00:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:57.890 00:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.890 00:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:58.150 00:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.150 00:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:58.150 00:08:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.150 00:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.091 [2024-11-10 00:08:25.192993] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:59.091 [2024-11-10 00:08:25.193050] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:59.091 [2024-11-10 00:08:25.193090] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:59.351 [2024-11-10 00:08:25.319546] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:59.351 [2024-11-10 00:08:25.541403] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:59.351 [2024-11-10 00:08:25.543264] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 00:34:59.351 [2024-11-10 00:08:25.545403] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:59.351 [2024-11-10 00:08:25.545500] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:59.351 [2024-11-10 00:08:25.545593] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:59.351 [2024-11-10 00:08:25.545633] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:59.351 [2024-11-10 00:08:25.545690] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:59.351 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.351 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:59.351 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.351 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.351 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.351 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.351 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.351 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.351 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.610 [2024-11-10 00:08:25.552127] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
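Once discovery attaches nvme0 and creates the nvme0n1 bdev, the wait_for_bdev/get_bdev_list helpers traced here (discovery_remove_ifc.sh@29-34) just poll the host app's RPC socket until the bdev list matches what is expected. A minimal stand-alone version of that polling loop is sketched below, assuming the rpc.py from this workspace, the /tmp/host.sock socket used by the host app, and jq on PATH; the retry count is arbitrary.

# Poll bdev_get_bdevs over the host RPC socket until the expected bdev shows up,
# in the spirit of the get_bdev_list/wait_for_bdev helpers traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
expected=nvme0n1
bdevs=
for _ in {1..10}; do
        bdevs=$("$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [[ $bdevs == "$expected" ]] && break
        sleep 1
done
[[ $bdevs == "$expected" ]] || { echo "bdev list is '$bdevs', expected '$expected'" >&2; exit 1; }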
00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:59.610 00:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:00.990 00:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:00.990 00:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:00.990 00:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:00.990 00:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.990 00:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:00.990 00:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.990 00:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:00.990 00:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.990 00:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:00.990 00:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:01.924 00:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:01.924 00:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:01.924 00:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.924 00:08:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:01.924 00:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.924 00:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:01.924 00:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:01.924 00:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.924 00:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:01.924 00:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:02.860 00:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:02.860 00:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.860 00:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:02.860 00:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.860 00:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.860 00:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:02.860 00:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:02.860 00:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.860 00:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:02.860 00:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:03.857 00:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:03.857 00:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:03.857 00:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:03.857 00:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.857 00:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.857 00:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:03.857 00:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:03.857 00:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.857 00:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:03.857 00:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:04.795 00:08:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:04.795 00:08:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:04.795 00:08:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:04.795 00:08:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.795 00:08:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:04.795 00:08:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:04.795 00:08:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:04.795 00:08:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.795 [2024-11-10 00:08:30.987575] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:04.796 [2024-11-10 00:08:30.987699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.796 [2024-11-10 00:08:30.987741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.796 [2024-11-10 00:08:30.987774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.796 [2024-11-10 00:08:30.987794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.796 [2024-11-10 00:08:30.987815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.796 [2024-11-10 00:08:30.987836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.796 [2024-11-10 00:08:30.987857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.796 [2024-11-10 00:08:30.987893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.796 [2024-11-10 00:08:30.987914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.796 [2024-11-10 00:08:30.987933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.796 [2024-11-10 00:08:30.987971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:35:05.055 [2024-11-10 00:08:30.997583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:35:05.055 [2024-11-10 00:08:31.007645] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:05.055 [2024-11-10 00:08:31.007681] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:05.055 [2024-11-10 00:08:31.007698] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:35:05.055 [2024-11-10 00:08:31.007713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:05.055 [2024-11-10 00:08:31.007779] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:05.055 00:08:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:05.055 00:08:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:05.991 00:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:05.991 00:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:05.991 00:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:05.991 00:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.991 00:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:05.991 00:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:05.991 00:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:05.991 [2024-11-10 00:08:32.034760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:05.991 [2024-11-10 00:08:32.034864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:05.991 [2024-11-10 00:08:32.034907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:35:05.991 [2024-11-10 00:08:32.034982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:35:05.991 [2024-11-10 00:08:32.035723] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:35:05.991 [2024-11-10 00:08:32.035797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:05.991 [2024-11-10 00:08:32.035835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:05.991 [2024-11-10 00:08:32.035864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:05.991 [2024-11-10 00:08:32.035887] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:05.991 [2024-11-10 00:08:32.035906] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:05.991 [2024-11-10 00:08:32.035921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:05.991 [2024-11-10 00:08:32.035945] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
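The errno 110 (Connection timed out), "Bad file descriptor", and failed-reconnect entries above are the intended fault: a few seconds earlier the trace removed the target-side address and downed the link inside the target's network namespace, so the established NVMe/TCP qpair can no longer be flushed or re-established. The fault-injection step, copied from the commands already logged above (host/discovery_remove_ifc.sh@75-76):

# Pull the data path out from under the connected controller.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

Once the host gives up and the nvme0n1 bdev disappears, the test restores the address and link (the @82-83 steps further down) and then waits for discovery to re-attach the subsystem as nvme1n1.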
00:35:05.991 [2024-11-10 00:08:32.035963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:05.991 00:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.991 00:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:05.991 00:08:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:06.925 [2024-11-10 00:08:33.038492] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:06.925 [2024-11-10 00:08:33.038541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:06.925 [2024-11-10 00:08:33.038573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:06.925 [2024-11-10 00:08:33.038603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:06.925 [2024-11-10 00:08:33.038640] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:06.925 [2024-11-10 00:08:33.038659] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:06.925 [2024-11-10 00:08:33.038673] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:06.925 [2024-11-10 00:08:33.038684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:06.925 [2024-11-10 00:08:33.038750] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:06.925 [2024-11-10 00:08:33.038824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.925 [2024-11-10 00:08:33.038880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.925 [2024-11-10 00:08:33.038914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.925 [2024-11-10 00:08:33.038951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.925 [2024-11-10 00:08:33.038975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.925 [2024-11-10 00:08:33.038998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.925 [2024-11-10 00:08:33.039023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.925 [2024-11-10 00:08:33.039046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.925 [2024-11-10 00:08:33.039071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.925 [2024-11-10 00:08:33.039094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.925 [2024-11-10 00:08:33.039115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:35:06.925 [2024-11-10 00:08:33.039207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:35:06.925 [2024-11-10 00:08:33.040194] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:06.925 [2024-11-10 00:08:33.040229] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:06.925 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:06.926 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:06.926 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:06.926 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.926 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:06.926 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.926 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:06.926 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.926 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:06.926 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.926 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:07.187 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:07.187 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:07.187 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:07.187 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:07.187 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.187 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:07.187 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:07.187 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:07.187 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.188 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:07.188 00:08:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:08.127 00:08:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:08.127 00:08:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:08.127 00:08:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:08.127 00:08:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.127 00:08:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:08.127 00:08:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.127 00:08:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:08.127 00:08:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.127 00:08:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:08.127 00:08:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:09.065 [2024-11-10 00:08:35.060161] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:09.065 [2024-11-10 00:08:35.060219] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:09.065 [2024-11-10 00:08:35.060271] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:09.065 [2024-11-10 00:08:35.146584] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:09.065 00:08:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:09.065 00:08:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:09.065 00:08:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:09.065 00:08:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.065 00:08:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:09.065 00:08:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:09.065 00:08:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:09.065 00:08:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.322 00:08:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:09.322 00:08:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:09.322 [2024-11-10 00:08:35.330020] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:09.322 [2024-11-10 00:08:35.331548] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x6150001f3900:1 started. 
00:35:09.322 [2024-11-10 00:08:35.333940] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:09.322 [2024-11-10 00:08:35.334021] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:09.322 [2024-11-10 00:08:35.334109] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:09.322 [2024-11-10 00:08:35.334150] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:09.322 [2024-11-10 00:08:35.334175] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:09.322 [2024-11-10 00:08:35.379911] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x6150001f3900 was disconnected and freed. delete nvme_qpair. 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3612009 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3612009 ']' 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3612009 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3612009 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3612009' 00:35:10.259 killing process with pid 3612009 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3612009 00:35:10.259 00:08:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3612009 00:35:11.201 
00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:11.201 rmmod nvme_tcp 00:35:11.201 rmmod nvme_fabrics 00:35:11.201 rmmod nvme_keyring 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3611849 ']' 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3611849 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3611849 ']' 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3611849 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3611849 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3611849' 00:35:11.201 killing process with pid 3611849 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3611849 00:35:11.201 00:08:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3611849 00:35:12.581 00:08:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:12.581 00:08:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:12.581 00:08:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:12.581 00:08:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:12.581 00:08:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:12.581 00:08:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:12.581 00:08:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:12.582 00:08:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:12.582 00:08:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:12.582 00:08:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.582 00:08:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:12.582 00:08:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.486 00:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:14.486 00:35:14.486 real 0m21.301s 00:35:14.486 user 0m31.362s 00:35:14.486 sys 0m3.313s 00:35:14.486 00:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:14.486 00:08:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:14.486 ************************************ 00:35:14.486 END TEST nvmf_discovery_remove_ifc 00:35:14.486 ************************************ 00:35:14.486 00:08:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:14.486 00:08:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:14.486 00:08:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:14.486 00:08:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.486 ************************************ 00:35:14.486 START TEST nvmf_identify_kernel_target 00:35:14.486 ************************************ 00:35:14.486 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:14.486 * Looking for test storage... 
00:35:14.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:14.486 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:14.486 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:35:14.486 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:14.745 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:14.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.745 --rc genhtml_branch_coverage=1 00:35:14.745 --rc genhtml_function_coverage=1 00:35:14.745 --rc genhtml_legend=1 00:35:14.745 --rc geninfo_all_blocks=1 00:35:14.746 --rc geninfo_unexecuted_blocks=1 00:35:14.746 00:35:14.746 ' 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:14.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.746 --rc genhtml_branch_coverage=1 00:35:14.746 --rc genhtml_function_coverage=1 00:35:14.746 --rc genhtml_legend=1 00:35:14.746 --rc geninfo_all_blocks=1 00:35:14.746 --rc geninfo_unexecuted_blocks=1 00:35:14.746 00:35:14.746 ' 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:14.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.746 --rc genhtml_branch_coverage=1 00:35:14.746 --rc genhtml_function_coverage=1 00:35:14.746 --rc genhtml_legend=1 00:35:14.746 --rc geninfo_all_blocks=1 00:35:14.746 --rc geninfo_unexecuted_blocks=1 00:35:14.746 00:35:14.746 ' 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:14.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.746 --rc genhtml_branch_coverage=1 00:35:14.746 --rc genhtml_function_coverage=1 00:35:14.746 --rc genhtml_legend=1 00:35:14.746 --rc geninfo_all_blocks=1 00:35:14.746 --rc geninfo_unexecuted_blocks=1 00:35:14.746 00:35:14.746 ' 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:35:14.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:14.746 00:08:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:16.649 00:08:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:16.649 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:16.649 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:16.649 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:16.649 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.649 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:16.650 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:16.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:16.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:35:16.909 00:35:16.909 --- 10.0.0.2 ping statistics --- 00:35:16.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.909 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:16.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:35:16.909 00:35:16.909 --- 10.0.0.1 ping statistics --- 00:35:16.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.909 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.909 00:08:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:16.909 00:08:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:17.844 Waiting for block devices as requested 00:35:17.844 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:18.110 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:18.110 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:18.370 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:18.370 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:18.370 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:18.370 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:18.630 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:18.630 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:18.630 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:18.630 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:18.887 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:18.887 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:18.887 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:18.887 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:19.146 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:19.146 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:19.146 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:19.146 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:19.146 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:19.146 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:19.146 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:19.146 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
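The configure_kernel_target helper entered above drives the Linux nvmet configfs tree whose paths are stored in kernel_subsystem, kernel_namespace and kernel_port. A minimal standalone sketch of the same setup is shown here for reference; the configfs attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are inferred from the standard nvmet layout, since the xtrace that follows only records the echo side of each redirection, while the NQN, backing device and 10.0.0.1:4420 listener are taken from this run.

  # minimal sketch of the kernel NVMe-oF/TCP target built by configure_kernel_target
  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet
  modprobe nvmet                                        # makes $nvmet appear

  mkdir "$nvmet/subsystems/$nqn"
  echo "SPDK-$nqn" > "$nvmet/subsystems/$nqn/attr_model"          # reported later as "Model Number"
  echo 1 > "$nvmet/subsystems/$nqn/attr_allow_any_host"           # test-only: accept any host NQN

  mkdir "$nvmet/subsystems/$nqn/namespaces/1"
  echo /dev/nvme0n1 > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
  echo 1 > "$nvmet/subsystems/$nqn/namespaces/1/enable"

  mkdir "$nvmet/ports/1"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"

  # expose the subsystem on the port; discovery then reports both the
  # discovery subsystem and nqn.2016-06.io.spdk:testnqn, as seen below
  ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/$nqn"

The mkdir/echo/ln trace that follows is this sequence as executed by the test, interleaved with the block-device scan that picks /dev/nvme0n1 as the namespace backing device.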
00:35:19.146 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:19.146 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:19.146 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:19.404 No valid GPT data, bailing 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:19.404 00:35:19.404 Discovery Log Number of Records 2, Generation counter 2 00:35:19.404 =====Discovery Log Entry 0====== 00:35:19.404 trtype: tcp 00:35:19.404 adrfam: ipv4 00:35:19.404 subtype: current discovery subsystem 00:35:19.404 treq: not specified, sq flow control disable supported 00:35:19.404 portid: 1 00:35:19.404 trsvcid: 4420 00:35:19.404 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:19.404 traddr: 10.0.0.1 00:35:19.404 eflags: none 00:35:19.404 sectype: none 00:35:19.404 =====Discovery Log Entry 1====== 00:35:19.404 trtype: tcp 00:35:19.404 adrfam: ipv4 00:35:19.404 subtype: nvme subsystem 00:35:19.404 treq: not specified, sq flow control disable 
supported 00:35:19.404 portid: 1 00:35:19.404 trsvcid: 4420 00:35:19.404 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:19.404 traddr: 10.0.0.1 00:35:19.404 eflags: none 00:35:19.404 sectype: none 00:35:19.404 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:19.404 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:19.663 ===================================================== 00:35:19.663 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:19.663 ===================================================== 00:35:19.663 Controller Capabilities/Features 00:35:19.663 ================================ 00:35:19.663 Vendor ID: 0000 00:35:19.663 Subsystem Vendor ID: 0000 00:35:19.663 Serial Number: a8b0aadd82342d1ab000 00:35:19.663 Model Number: Linux 00:35:19.663 Firmware Version: 6.8.9-20 00:35:19.663 Recommended Arb Burst: 0 00:35:19.663 IEEE OUI Identifier: 00 00 00 00:35:19.663 Multi-path I/O 00:35:19.663 May have multiple subsystem ports: No 00:35:19.663 May have multiple controllers: No 00:35:19.663 Associated with SR-IOV VF: No 00:35:19.663 Max Data Transfer Size: Unlimited 00:35:19.663 Max Number of Namespaces: 0 00:35:19.663 Max Number of I/O Queues: 1024 00:35:19.663 NVMe Specification Version (VS): 1.3 00:35:19.663 NVMe Specification Version (Identify): 1.3 00:35:19.663 Maximum Queue Entries: 1024 00:35:19.663 Contiguous Queues Required: No 00:35:19.663 Arbitration Mechanisms Supported 00:35:19.663 Weighted Round Robin: Not Supported 00:35:19.663 Vendor Specific: Not Supported 00:35:19.663 Reset Timeout: 7500 ms 00:35:19.663 Doorbell Stride: 4 bytes 00:35:19.663 NVM Subsystem Reset: Not Supported 00:35:19.663 Command Sets Supported 00:35:19.663 NVM Command Set: Supported 00:35:19.663 Boot Partition: Not Supported 00:35:19.663 Memory Page Size Minimum: 4096 bytes 00:35:19.663 Memory Page Size Maximum: 4096 bytes 00:35:19.663 Persistent Memory Region: Not Supported 00:35:19.663 Optional Asynchronous Events Supported 00:35:19.663 Namespace Attribute Notices: Not Supported 00:35:19.663 Firmware Activation Notices: Not Supported 00:35:19.663 ANA Change Notices: Not Supported 00:35:19.663 PLE Aggregate Log Change Notices: Not Supported 00:35:19.663 LBA Status Info Alert Notices: Not Supported 00:35:19.663 EGE Aggregate Log Change Notices: Not Supported 00:35:19.663 Normal NVM Subsystem Shutdown event: Not Supported 00:35:19.663 Zone Descriptor Change Notices: Not Supported 00:35:19.663 Discovery Log Change Notices: Supported 00:35:19.663 Controller Attributes 00:35:19.663 128-bit Host Identifier: Not Supported 00:35:19.663 Non-Operational Permissive Mode: Not Supported 00:35:19.663 NVM Sets: Not Supported 00:35:19.663 Read Recovery Levels: Not Supported 00:35:19.663 Endurance Groups: Not Supported 00:35:19.663 Predictable Latency Mode: Not Supported 00:35:19.663 Traffic Based Keep ALive: Not Supported 00:35:19.663 Namespace Granularity: Not Supported 00:35:19.663 SQ Associations: Not Supported 00:35:19.663 UUID List: Not Supported 00:35:19.664 Multi-Domain Subsystem: Not Supported 00:35:19.664 Fixed Capacity Management: Not Supported 00:35:19.664 Variable Capacity Management: Not Supported 00:35:19.664 Delete Endurance Group: Not Supported 00:35:19.664 Delete NVM Set: Not Supported 00:35:19.664 Extended LBA Formats Supported: Not Supported 00:35:19.664 Flexible Data Placement 
Supported: Not Supported 00:35:19.664 00:35:19.664 Controller Memory Buffer Support 00:35:19.664 ================================ 00:35:19.664 Supported: No 00:35:19.664 00:35:19.664 Persistent Memory Region Support 00:35:19.664 ================================ 00:35:19.664 Supported: No 00:35:19.664 00:35:19.664 Admin Command Set Attributes 00:35:19.664 ============================ 00:35:19.664 Security Send/Receive: Not Supported 00:35:19.664 Format NVM: Not Supported 00:35:19.664 Firmware Activate/Download: Not Supported 00:35:19.664 Namespace Management: Not Supported 00:35:19.664 Device Self-Test: Not Supported 00:35:19.664 Directives: Not Supported 00:35:19.664 NVMe-MI: Not Supported 00:35:19.664 Virtualization Management: Not Supported 00:35:19.664 Doorbell Buffer Config: Not Supported 00:35:19.664 Get LBA Status Capability: Not Supported 00:35:19.664 Command & Feature Lockdown Capability: Not Supported 00:35:19.664 Abort Command Limit: 1 00:35:19.664 Async Event Request Limit: 1 00:35:19.664 Number of Firmware Slots: N/A 00:35:19.664 Firmware Slot 1 Read-Only: N/A 00:35:19.664 Firmware Activation Without Reset: N/A 00:35:19.664 Multiple Update Detection Support: N/A 00:35:19.664 Firmware Update Granularity: No Information Provided 00:35:19.664 Per-Namespace SMART Log: No 00:35:19.664 Asymmetric Namespace Access Log Page: Not Supported 00:35:19.664 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:19.664 Command Effects Log Page: Not Supported 00:35:19.664 Get Log Page Extended Data: Supported 00:35:19.664 Telemetry Log Pages: Not Supported 00:35:19.664 Persistent Event Log Pages: Not Supported 00:35:19.664 Supported Log Pages Log Page: May Support 00:35:19.664 Commands Supported & Effects Log Page: Not Supported 00:35:19.664 Feature Identifiers & Effects Log Page:May Support 00:35:19.664 NVMe-MI Commands & Effects Log Page: May Support 00:35:19.664 Data Area 4 for Telemetry Log: Not Supported 00:35:19.664 Error Log Page Entries Supported: 1 00:35:19.664 Keep Alive: Not Supported 00:35:19.664 00:35:19.664 NVM Command Set Attributes 00:35:19.664 ========================== 00:35:19.664 Submission Queue Entry Size 00:35:19.664 Max: 1 00:35:19.664 Min: 1 00:35:19.664 Completion Queue Entry Size 00:35:19.664 Max: 1 00:35:19.664 Min: 1 00:35:19.664 Number of Namespaces: 0 00:35:19.664 Compare Command: Not Supported 00:35:19.664 Write Uncorrectable Command: Not Supported 00:35:19.664 Dataset Management Command: Not Supported 00:35:19.664 Write Zeroes Command: Not Supported 00:35:19.664 Set Features Save Field: Not Supported 00:35:19.664 Reservations: Not Supported 00:35:19.664 Timestamp: Not Supported 00:35:19.664 Copy: Not Supported 00:35:19.664 Volatile Write Cache: Not Present 00:35:19.664 Atomic Write Unit (Normal): 1 00:35:19.664 Atomic Write Unit (PFail): 1 00:35:19.664 Atomic Compare & Write Unit: 1 00:35:19.664 Fused Compare & Write: Not Supported 00:35:19.664 Scatter-Gather List 00:35:19.664 SGL Command Set: Supported 00:35:19.664 SGL Keyed: Not Supported 00:35:19.664 SGL Bit Bucket Descriptor: Not Supported 00:35:19.664 SGL Metadata Pointer: Not Supported 00:35:19.664 Oversized SGL: Not Supported 00:35:19.664 SGL Metadata Address: Not Supported 00:35:19.664 SGL Offset: Supported 00:35:19.664 Transport SGL Data Block: Not Supported 00:35:19.664 Replay Protected Memory Block: Not Supported 00:35:19.664 00:35:19.664 Firmware Slot Information 00:35:19.664 ========================= 00:35:19.664 Active slot: 0 00:35:19.664 00:35:19.664 00:35:19.664 Error Log 00:35:19.664 
========= 00:35:19.664 00:35:19.664 Active Namespaces 00:35:19.664 ================= 00:35:19.664 Discovery Log Page 00:35:19.664 ================== 00:35:19.664 Generation Counter: 2 00:35:19.664 Number of Records: 2 00:35:19.664 Record Format: 0 00:35:19.664 00:35:19.664 Discovery Log Entry 0 00:35:19.664 ---------------------- 00:35:19.664 Transport Type: 3 (TCP) 00:35:19.664 Address Family: 1 (IPv4) 00:35:19.664 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:19.664 Entry Flags: 00:35:19.664 Duplicate Returned Information: 0 00:35:19.664 Explicit Persistent Connection Support for Discovery: 0 00:35:19.664 Transport Requirements: 00:35:19.664 Secure Channel: Not Specified 00:35:19.664 Port ID: 1 (0x0001) 00:35:19.664 Controller ID: 65535 (0xffff) 00:35:19.664 Admin Max SQ Size: 32 00:35:19.664 Transport Service Identifier: 4420 00:35:19.664 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:19.664 Transport Address: 10.0.0.1 00:35:19.664 Discovery Log Entry 1 00:35:19.664 ---------------------- 00:35:19.664 Transport Type: 3 (TCP) 00:35:19.664 Address Family: 1 (IPv4) 00:35:19.664 Subsystem Type: 2 (NVM Subsystem) 00:35:19.664 Entry Flags: 00:35:19.664 Duplicate Returned Information: 0 00:35:19.664 Explicit Persistent Connection Support for Discovery: 0 00:35:19.664 Transport Requirements: 00:35:19.664 Secure Channel: Not Specified 00:35:19.664 Port ID: 1 (0x0001) 00:35:19.664 Controller ID: 65535 (0xffff) 00:35:19.664 Admin Max SQ Size: 32 00:35:19.664 Transport Service Identifier: 4420 00:35:19.664 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:19.664 Transport Address: 10.0.0.1 00:35:19.664 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:19.664 get_feature(0x01) failed 00:35:19.664 get_feature(0x02) failed 00:35:19.664 get_feature(0x04) failed 00:35:19.664 ===================================================== 00:35:19.664 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:19.664 ===================================================== 00:35:19.664 Controller Capabilities/Features 00:35:19.664 ================================ 00:35:19.664 Vendor ID: 0000 00:35:19.664 Subsystem Vendor ID: 0000 00:35:19.664 Serial Number: 502b673f789fe55e32ee 00:35:19.664 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:19.664 Firmware Version: 6.8.9-20 00:35:19.664 Recommended Arb Burst: 6 00:35:19.664 IEEE OUI Identifier: 00 00 00 00:35:19.664 Multi-path I/O 00:35:19.664 May have multiple subsystem ports: Yes 00:35:19.664 May have multiple controllers: Yes 00:35:19.664 Associated with SR-IOV VF: No 00:35:19.664 Max Data Transfer Size: Unlimited 00:35:19.664 Max Number of Namespaces: 1024 00:35:19.664 Max Number of I/O Queues: 128 00:35:19.664 NVMe Specification Version (VS): 1.3 00:35:19.664 NVMe Specification Version (Identify): 1.3 00:35:19.664 Maximum Queue Entries: 1024 00:35:19.664 Contiguous Queues Required: No 00:35:19.664 Arbitration Mechanisms Supported 00:35:19.664 Weighted Round Robin: Not Supported 00:35:19.664 Vendor Specific: Not Supported 00:35:19.664 Reset Timeout: 7500 ms 00:35:19.664 Doorbell Stride: 4 bytes 00:35:19.664 NVM Subsystem Reset: Not Supported 00:35:19.664 Command Sets Supported 00:35:19.664 NVM Command Set: Supported 00:35:19.664 Boot Partition: Not Supported 00:35:19.664 
Memory Page Size Minimum: 4096 bytes 00:35:19.664 Memory Page Size Maximum: 4096 bytes 00:35:19.664 Persistent Memory Region: Not Supported 00:35:19.664 Optional Asynchronous Events Supported 00:35:19.664 Namespace Attribute Notices: Supported 00:35:19.664 Firmware Activation Notices: Not Supported 00:35:19.664 ANA Change Notices: Supported 00:35:19.664 PLE Aggregate Log Change Notices: Not Supported 00:35:19.664 LBA Status Info Alert Notices: Not Supported 00:35:19.664 EGE Aggregate Log Change Notices: Not Supported 00:35:19.664 Normal NVM Subsystem Shutdown event: Not Supported 00:35:19.664 Zone Descriptor Change Notices: Not Supported 00:35:19.664 Discovery Log Change Notices: Not Supported 00:35:19.664 Controller Attributes 00:35:19.664 128-bit Host Identifier: Supported 00:35:19.664 Non-Operational Permissive Mode: Not Supported 00:35:19.664 NVM Sets: Not Supported 00:35:19.664 Read Recovery Levels: Not Supported 00:35:19.664 Endurance Groups: Not Supported 00:35:19.664 Predictable Latency Mode: Not Supported 00:35:19.664 Traffic Based Keep ALive: Supported 00:35:19.664 Namespace Granularity: Not Supported 00:35:19.664 SQ Associations: Not Supported 00:35:19.664 UUID List: Not Supported 00:35:19.664 Multi-Domain Subsystem: Not Supported 00:35:19.664 Fixed Capacity Management: Not Supported 00:35:19.664 Variable Capacity Management: Not Supported 00:35:19.664 Delete Endurance Group: Not Supported 00:35:19.665 Delete NVM Set: Not Supported 00:35:19.665 Extended LBA Formats Supported: Not Supported 00:35:19.665 Flexible Data Placement Supported: Not Supported 00:35:19.665 00:35:19.665 Controller Memory Buffer Support 00:35:19.665 ================================ 00:35:19.665 Supported: No 00:35:19.665 00:35:19.665 Persistent Memory Region Support 00:35:19.665 ================================ 00:35:19.665 Supported: No 00:35:19.665 00:35:19.665 Admin Command Set Attributes 00:35:19.665 ============================ 00:35:19.665 Security Send/Receive: Not Supported 00:35:19.665 Format NVM: Not Supported 00:35:19.665 Firmware Activate/Download: Not Supported 00:35:19.665 Namespace Management: Not Supported 00:35:19.665 Device Self-Test: Not Supported 00:35:19.665 Directives: Not Supported 00:35:19.665 NVMe-MI: Not Supported 00:35:19.665 Virtualization Management: Not Supported 00:35:19.665 Doorbell Buffer Config: Not Supported 00:35:19.665 Get LBA Status Capability: Not Supported 00:35:19.665 Command & Feature Lockdown Capability: Not Supported 00:35:19.665 Abort Command Limit: 4 00:35:19.665 Async Event Request Limit: 4 00:35:19.665 Number of Firmware Slots: N/A 00:35:19.665 Firmware Slot 1 Read-Only: N/A 00:35:19.665 Firmware Activation Without Reset: N/A 00:35:19.665 Multiple Update Detection Support: N/A 00:35:19.665 Firmware Update Granularity: No Information Provided 00:35:19.665 Per-Namespace SMART Log: Yes 00:35:19.665 Asymmetric Namespace Access Log Page: Supported 00:35:19.665 ANA Transition Time : 10 sec 00:35:19.665 00:35:19.665 Asymmetric Namespace Access Capabilities 00:35:19.665 ANA Optimized State : Supported 00:35:19.665 ANA Non-Optimized State : Supported 00:35:19.665 ANA Inaccessible State : Supported 00:35:19.665 ANA Persistent Loss State : Supported 00:35:19.665 ANA Change State : Supported 00:35:19.665 ANAGRPID is not changed : No 00:35:19.665 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:19.665 00:35:19.665 ANA Group Identifier Maximum : 128 00:35:19.665 Number of ANA Group Identifiers : 128 00:35:19.665 Max Number of Allowed Namespaces : 1024 00:35:19.665 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:19.665 Command Effects Log Page: Supported 00:35:19.665 Get Log Page Extended Data: Supported 00:35:19.665 Telemetry Log Pages: Not Supported 00:35:19.665 Persistent Event Log Pages: Not Supported 00:35:19.665 Supported Log Pages Log Page: May Support 00:35:19.665 Commands Supported & Effects Log Page: Not Supported 00:35:19.665 Feature Identifiers & Effects Log Page:May Support 00:35:19.665 NVMe-MI Commands & Effects Log Page: May Support 00:35:19.665 Data Area 4 for Telemetry Log: Not Supported 00:35:19.665 Error Log Page Entries Supported: 128 00:35:19.665 Keep Alive: Supported 00:35:19.665 Keep Alive Granularity: 1000 ms 00:35:19.665 00:35:19.665 NVM Command Set Attributes 00:35:19.665 ========================== 00:35:19.665 Submission Queue Entry Size 00:35:19.665 Max: 64 00:35:19.665 Min: 64 00:35:19.665 Completion Queue Entry Size 00:35:19.665 Max: 16 00:35:19.665 Min: 16 00:35:19.665 Number of Namespaces: 1024 00:35:19.665 Compare Command: Not Supported 00:35:19.665 Write Uncorrectable Command: Not Supported 00:35:19.665 Dataset Management Command: Supported 00:35:19.665 Write Zeroes Command: Supported 00:35:19.665 Set Features Save Field: Not Supported 00:35:19.665 Reservations: Not Supported 00:35:19.665 Timestamp: Not Supported 00:35:19.665 Copy: Not Supported 00:35:19.665 Volatile Write Cache: Present 00:35:19.665 Atomic Write Unit (Normal): 1 00:35:19.665 Atomic Write Unit (PFail): 1 00:35:19.665 Atomic Compare & Write Unit: 1 00:35:19.665 Fused Compare & Write: Not Supported 00:35:19.665 Scatter-Gather List 00:35:19.665 SGL Command Set: Supported 00:35:19.665 SGL Keyed: Not Supported 00:35:19.665 SGL Bit Bucket Descriptor: Not Supported 00:35:19.665 SGL Metadata Pointer: Not Supported 00:35:19.665 Oversized SGL: Not Supported 00:35:19.665 SGL Metadata Address: Not Supported 00:35:19.665 SGL Offset: Supported 00:35:19.665 Transport SGL Data Block: Not Supported 00:35:19.665 Replay Protected Memory Block: Not Supported 00:35:19.665 00:35:19.665 Firmware Slot Information 00:35:19.665 ========================= 00:35:19.665 Active slot: 0 00:35:19.665 00:35:19.665 Asymmetric Namespace Access 00:35:19.665 =========================== 00:35:19.665 Change Count : 0 00:35:19.665 Number of ANA Group Descriptors : 1 00:35:19.665 ANA Group Descriptor : 0 00:35:19.665 ANA Group ID : 1 00:35:19.665 Number of NSID Values : 1 00:35:19.665 Change Count : 0 00:35:19.665 ANA State : 1 00:35:19.665 Namespace Identifier : 1 00:35:19.665 00:35:19.665 Commands Supported and Effects 00:35:19.665 ============================== 00:35:19.665 Admin Commands 00:35:19.665 -------------- 00:35:19.665 Get Log Page (02h): Supported 00:35:19.665 Identify (06h): Supported 00:35:19.665 Abort (08h): Supported 00:35:19.665 Set Features (09h): Supported 00:35:19.665 Get Features (0Ah): Supported 00:35:19.665 Asynchronous Event Request (0Ch): Supported 00:35:19.665 Keep Alive (18h): Supported 00:35:19.665 I/O Commands 00:35:19.665 ------------ 00:35:19.665 Flush (00h): Supported 00:35:19.665 Write (01h): Supported LBA-Change 00:35:19.665 Read (02h): Supported 00:35:19.665 Write Zeroes (08h): Supported LBA-Change 00:35:19.665 Dataset Management (09h): Supported 00:35:19.665 00:35:19.665 Error Log 00:35:19.665 ========= 00:35:19.665 Entry: 0 00:35:19.665 Error Count: 0x3 00:35:19.665 Submission Queue Id: 0x0 00:35:19.665 Command Id: 0x5 00:35:19.665 Phase Bit: 0 00:35:19.665 Status Code: 0x2 00:35:19.665 Status Code Type: 0x0 00:35:19.665 Do Not Retry: 1 00:35:19.665 
Error Location: 0x28 00:35:19.665 LBA: 0x0 00:35:19.665 Namespace: 0x0 00:35:19.665 Vendor Log Page: 0x0 00:35:19.665 ----------- 00:35:19.665 Entry: 1 00:35:19.665 Error Count: 0x2 00:35:19.665 Submission Queue Id: 0x0 00:35:19.665 Command Id: 0x5 00:35:19.665 Phase Bit: 0 00:35:19.665 Status Code: 0x2 00:35:19.665 Status Code Type: 0x0 00:35:19.665 Do Not Retry: 1 00:35:19.665 Error Location: 0x28 00:35:19.665 LBA: 0x0 00:35:19.665 Namespace: 0x0 00:35:19.665 Vendor Log Page: 0x0 00:35:19.665 ----------- 00:35:19.665 Entry: 2 00:35:19.665 Error Count: 0x1 00:35:19.665 Submission Queue Id: 0x0 00:35:19.665 Command Id: 0x4 00:35:19.665 Phase Bit: 0 00:35:19.665 Status Code: 0x2 00:35:19.665 Status Code Type: 0x0 00:35:19.665 Do Not Retry: 1 00:35:19.665 Error Location: 0x28 00:35:19.665 LBA: 0x0 00:35:19.665 Namespace: 0x0 00:35:19.665 Vendor Log Page: 0x0 00:35:19.665 00:35:19.665 Number of Queues 00:35:19.665 ================ 00:35:19.665 Number of I/O Submission Queues: 128 00:35:19.665 Number of I/O Completion Queues: 128 00:35:19.665 00:35:19.665 ZNS Specific Controller Data 00:35:19.665 ============================ 00:35:19.665 Zone Append Size Limit: 0 00:35:19.665 00:35:19.665 00:35:19.665 Active Namespaces 00:35:19.665 ================= 00:35:19.665 get_feature(0x05) failed 00:35:19.665 Namespace ID:1 00:35:19.665 Command Set Identifier: NVM (00h) 00:35:19.665 Deallocate: Supported 00:35:19.665 Deallocated/Unwritten Error: Not Supported 00:35:19.665 Deallocated Read Value: Unknown 00:35:19.665 Deallocate in Write Zeroes: Not Supported 00:35:19.665 Deallocated Guard Field: 0xFFFF 00:35:19.665 Flush: Supported 00:35:19.665 Reservation: Not Supported 00:35:19.665 Namespace Sharing Capabilities: Multiple Controllers 00:35:19.665 Size (in LBAs): 1953525168 (931GiB) 00:35:19.665 Capacity (in LBAs): 1953525168 (931GiB) 00:35:19.665 Utilization (in LBAs): 1953525168 (931GiB) 00:35:19.665 UUID: 58c0078b-c1c7-4c6c-b081-b77effcdc085 00:35:19.665 Thin Provisioning: Not Supported 00:35:19.665 Per-NS Atomic Units: Yes 00:35:19.665 Atomic Boundary Size (Normal): 0 00:35:19.665 Atomic Boundary Size (PFail): 0 00:35:19.665 Atomic Boundary Offset: 0 00:35:19.665 NGUID/EUI64 Never Reused: No 00:35:19.665 ANA group ID: 1 00:35:19.665 Namespace Write Protected: No 00:35:19.665 Number of LBA Formats: 1 00:35:19.665 Current LBA Format: LBA Format #00 00:35:19.665 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:19.665 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:19.924 rmmod nvme_tcp 00:35:19.924 rmmod nvme_fabrics 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:19.924 00:08:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.924 00:08:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.835 00:08:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:21.835 00:08:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:21.835 00:08:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:21.835 00:08:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:21.835 00:08:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:21.835 00:08:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:21.835 00:08:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:21.835 00:08:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:21.835 00:08:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:21.835 00:08:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:21.835 00:08:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:23.217 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:23.217 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:23.217 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:23.217 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:23.217 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:23.217 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:35:23.217 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:23.217 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:23.217 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:23.217 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:23.217 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:23.217 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:23.217 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:23.217 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:23.217 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:23.217 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:24.150 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:24.409 00:35:24.409 real 0m9.781s 00:35:24.409 user 0m2.224s 00:35:24.409 sys 0m3.559s 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:24.409 ************************************ 00:35:24.409 END TEST nvmf_identify_kernel_target 00:35:24.409 ************************************ 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.409 ************************************ 00:35:24.409 START TEST nvmf_auth_host 00:35:24.409 ************************************ 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:24.409 * Looking for test storage... 
00:35:24.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:24.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.409 --rc genhtml_branch_coverage=1 00:35:24.409 --rc genhtml_function_coverage=1 00:35:24.409 --rc genhtml_legend=1 00:35:24.409 --rc geninfo_all_blocks=1 00:35:24.409 --rc geninfo_unexecuted_blocks=1 00:35:24.409 00:35:24.409 ' 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:24.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.409 --rc genhtml_branch_coverage=1 00:35:24.409 --rc genhtml_function_coverage=1 00:35:24.409 --rc genhtml_legend=1 00:35:24.409 --rc geninfo_all_blocks=1 00:35:24.409 --rc geninfo_unexecuted_blocks=1 00:35:24.409 00:35:24.409 ' 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:24.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.409 --rc genhtml_branch_coverage=1 00:35:24.409 --rc genhtml_function_coverage=1 00:35:24.409 --rc genhtml_legend=1 00:35:24.409 --rc geninfo_all_blocks=1 00:35:24.409 --rc geninfo_unexecuted_blocks=1 00:35:24.409 00:35:24.409 ' 00:35:24.409 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:24.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.409 --rc genhtml_branch_coverage=1 00:35:24.410 --rc genhtml_function_coverage=1 00:35:24.410 --rc genhtml_legend=1 00:35:24.410 --rc geninfo_all_blocks=1 00:35:24.410 --rc geninfo_unexecuted_blocks=1 00:35:24.410 00:35:24.410 ' 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:24.410 00:08:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:24.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:24.410 00:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:26.316 00:08:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.316 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:26.316 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:26.317 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.317 
00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:26.317 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:26.317 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.317 00:08:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:26.317 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.574 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.574 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.574 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.574 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:26.574 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.574 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.574 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.574 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:26.574 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:26.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:35:26.574 00:35:26.574 --- 10.0.0.2 ping statistics --- 00:35:26.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.574 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:35:26.574 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:26.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:35:26.574 00:35:26.574 --- 10.0.0.1 ping statistics --- 00:35:26.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.574 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:35:26.574 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3619485 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3619485 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3619485 ']' 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
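For reference, the nvmf_tcp_init sequence traced above reduces to the following shell sketch. Every command is taken from this run (interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addressing); only the comments are added.

# Target side (cvl_0_0) moves into its own network namespace, initiator side
# (cvl_0_1) stays in the default namespace; 10.0.0.2 is the target address.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface and verify
# reachability in both directions before starting the target.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1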
00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:26.575 00:08:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.954 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:27.954 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:35:27.954 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:27.954 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:27.954 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.954 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.954 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=71b966e6fc51ad65b4b6a1e216640c8f 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vvA 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 71b966e6fc51ad65b4b6a1e216640c8f 0 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 71b966e6fc51ad65b4b6a1e216640c8f 0 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=71b966e6fc51ad65b4b6a1e216640c8f 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vvA 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vvA 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.vvA 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.955 00:08:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=888d6a18d4110f91f5b1d5ba22e1d9fb4ac7f6aabecb3af7a4e38d614bf96768 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.XUn 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 888d6a18d4110f91f5b1d5ba22e1d9fb4ac7f6aabecb3af7a4e38d614bf96768 3 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 888d6a18d4110f91f5b1d5ba22e1d9fb4ac7f6aabecb3af7a4e38d614bf96768 3 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=888d6a18d4110f91f5b1d5ba22e1d9fb4ac7f6aabecb3af7a4e38d614bf96768 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.XUn 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.XUn 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.XUn 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d05ee30156ed9da95f550a3d0ac5c3bbe78a59cced78591d 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.I8a 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d05ee30156ed9da95f550a3d0ac5c3bbe78a59cced78591d 0 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d05ee30156ed9da95f550a3d0ac5c3bbe78a59cced78591d 0 
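The gen_dhchap_key calls traced above each draw random bytes, wrap them into a DHHC-1 secret string, and store the result in a 0600 temp file. A minimal sketch of one call (null digest, 32 hex characters) follows; the encoding performed by the in-tree python step is not visible in the trace, so the CRC-32/base64 wrapping shown here is an assumption based on the DHHC-1 secret representation, not the script's verbatim code.

# Sketch of gen_dhchap_key null 32 (digest codes: null=0, sha256=1, sha384=2, sha512=3).
digest=0 len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # hex secret, e.g. 71b966e6fc51ad65b4b6a1e216640c8f
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import sys, base64, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")        # assumed: CRC-32 of the secret, little-endian
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PY
chmod 0600 "$file"
echo "$file"                                       # becomes one of keys[0..4] / ckeys[0..3] above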
00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d05ee30156ed9da95f550a3d0ac5c3bbe78a59cced78591d 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.I8a 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.I8a 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.I8a 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=47ea7401ad74a3008f1e32e1d9aa1c01a98849be4a683fe7 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.unR 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 47ea7401ad74a3008f1e32e1d9aa1c01a98849be4a683fe7 2 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 47ea7401ad74a3008f1e32e1d9aa1c01a98849be4a683fe7 2 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=47ea7401ad74a3008f1e32e1d9aa1c01a98849be4a683fe7 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:27.955 00:08:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.unR 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.unR 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.unR 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.955 00:08:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e8dbed2799de5356b8fea519861f2c94 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.42J 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e8dbed2799de5356b8fea519861f2c94 1 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e8dbed2799de5356b8fea519861f2c94 1 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e8dbed2799de5356b8fea519861f2c94 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.955 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.42J 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.42J 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.42J 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4a6f9ba6e75c077ec375bd88daf0b2c8 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.fqp 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4a6f9ba6e75c077ec375bd88daf0b2c8 1 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4a6f9ba6e75c077ec375bd88daf0b2c8 1 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=4a6f9ba6e75c077ec375bd88daf0b2c8 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.fqp 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.fqp 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.fqp 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=13d8e4885a701c69d83121a677b43ff186e2915e0eeee332 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.QTo 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 13d8e4885a701c69d83121a677b43ff186e2915e0eeee332 2 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 13d8e4885a701c69d83121a677b43ff186e2915e0eeee332 2 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=13d8e4885a701c69d83121a677b43ff186e2915e0eeee332 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.QTo 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.QTo 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.QTo 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:27.956 00:08:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f2aad8c7699e243c5458e47704890eb6 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mZG 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f2aad8c7699e243c5458e47704890eb6 0 00:35:27.956 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f2aad8c7699e243c5458e47704890eb6 0 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f2aad8c7699e243c5458e47704890eb6 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mZG 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mZG 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.mZG 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2cb9753c8f9f47a4c20409e186e2b4855e0050298fd36066023c419f24336b4f 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.imN 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2cb9753c8f9f47a4c20409e186e2b4855e0050298fd36066023c419f24336b4f 3 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2cb9753c8f9f47a4c20409e186e2b4855e0050298fd36066023c419f24336b4f 3 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2cb9753c8f9f47a4c20409e186e2b4855e0050298fd36066023c419f24336b4f 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.imN 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.imN 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.imN 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3619485 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3619485 ']' 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:28.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:28.215 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vvA 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.XUn ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XUn 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.I8a 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.unR ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.unR 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.42J 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.fqp ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fqp 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.QTo 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.mZG ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.mZG 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.imN 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.474 00:08:54 
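Each generated file is then registered with the running nvmf_tgt under a short name (key0..key4, ckey0..ckey3) through the keyring_file_add_key RPC, exactly as in the loop above. A condensed sketch, assuming rpc_cmd resolves to scripts/rpc.py in the checked-out spdk tree and that keys[]/ckeys[] hold the temp-file paths assigned above:

# Register every secret with the target; ckeys[4] is intentionally empty in this
# run, so the controller-side key is only added when one was generated.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[i]}"
    [[ -n ${ckeys[i]} ]] && "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
done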
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.474 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:28.475 00:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:29.413 Waiting for block devices as requested 00:35:29.413 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:29.672 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:29.672 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:29.930 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:29.930 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:29.930 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:29.930 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:30.190 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:30.190 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:30.190 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:30.190 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:30.449 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:30.449 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:30.449 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:30.709 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:30.709 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:30.709 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:31.279 No valid GPT data, bailing 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:31.279 00:08:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:31.279 00:35:31.279 Discovery Log Number of Records 2, Generation counter 2 00:35:31.279 =====Discovery Log Entry 0====== 00:35:31.279 trtype: tcp 00:35:31.279 adrfam: ipv4 00:35:31.279 subtype: current discovery subsystem 00:35:31.279 treq: not specified, sq flow control disable supported 00:35:31.279 portid: 1 00:35:31.279 trsvcid: 4420 00:35:31.279 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:31.279 traddr: 10.0.0.1 00:35:31.279 eflags: none 00:35:31.279 sectype: none 00:35:31.279 =====Discovery Log Entry 1====== 00:35:31.279 trtype: tcp 00:35:31.279 adrfam: ipv4 00:35:31.279 subtype: nvme subsystem 00:35:31.279 treq: not specified, sq flow control disable supported 00:35:31.279 portid: 1 00:35:31.279 trsvcid: 4420 00:35:31.279 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:31.279 traddr: 10.0.0.1 00:35:31.279 eflags: none 00:35:31.279 sectype: none 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
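The configure_kernel_target steps above build a Linux kernel NVMe/TCP soft target through configfs, back its single namespace with the local /dev/nvme0n1, expose it on 10.0.0.1:4420, and then restrict it to the test host NQN; the nvme discover output confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are reachable. A condensed sketch follows; the xtrace does not show where the echoed values are redirected, so the configfs attribute names below are filled in from the standard kernel nvmet layout and should be read as assumptions.

# Kernel soft target for the auth test (run as root, nvmet/nvme-tcp modules available).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"     # attribute name assumed
echo 1 > "$subsys/attr_allow_any_host"                          # attribute name assumed
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
# nvmet_auth_init: allow only the test host NQN on the subsystem.
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"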
-- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:31.279 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.280 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.540 nvme0n1 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:31.540 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.541 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.802 nvme0n1 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.802 00:08:57 
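Each iteration of the digest/dhgroup/key loop above follows the same pattern: nvmet_auth_set_key writes the hash, DH group and key pair into the kernel host entry, bdev_nvme_set_options restricts the initiator to that single digest and DH group, bdev_nvme_attach_controller authenticates with the matching registered key names, and the controller is verified and detached. A condensed sketch of one pass (keyid 0, sha256/ffdhe2048), with the same caveats as before: the rpc.py path and the dhchap_* configfs attribute names are assumptions, while the key files and RPC flags are taken from this run.

# One pass of the auth loop: configure the kernel host entry, then connect with
# the matching initiator-side keys and confirm the controller appears.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
echo 'hmac(sha256)' > "$host/dhchap_hash"                # attribute names assumed
echo ffdhe2048 > "$host/dhchap_dhgroup"
cat /tmp/spdk.key-null.vvA > "$host/dhchap_key"          # keys[0] generated above
cat /tmp/spdk.key-sha512.XUn > "$host/dhchap_ctrl_key"   # ckeys[0] generated above
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
"$rpc" bdev_nvme_detach_controller nvme0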
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.802 00:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.060 nvme0n1 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.060 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.317 nvme0n1 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.317 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.575 nvme0n1 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 
00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.575 nvme0n1 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.575 00:08:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.575 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:32.834 
00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.834 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.835 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.835 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.835 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.835 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.835 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:32.835 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.835 00:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.835 nvme0n1 00:35:32.835 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.835 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.835 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.835 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.835 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.835 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.093 00:08:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 nvme0n1 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.093 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.352 00:08:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.352 nvme0n1 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.352 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.615 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.615 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.615 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.615 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.615 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.615 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.615 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:33.615 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.615 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.615 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.616 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.617 00:08:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.617 nvme0n1 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.617 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
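For reference, each connect_authenticate pass in the trace above drives the same small SPDK RPC sequence; a minimal sketch of one pass, using the suite's rpc_cmd wrapper and the listener/NQNs shown in the log (digest, DH group and key index are the loop variables), would look roughly like:

    # One connect_authenticate pass as seen in the trace.
    # rpc_cmd is the test suite's wrapper around scripts/rpc.py; the
    # 10.0.0.1:4420 listener and the host/subsystem NQNs come from the log.
    digest=sha256
    dhgroup=ffdhe3072
    keyid=4

    # Restrict the initiator to the digest / DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach to the authenticated subsystem with the host key for this keyid
    # (plus --dhchap-ctrlr-key "ckey${keyid}" when a controller key is defined,
    # as in the keyid 0-3 passes above).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}"

    # Verify the controller came up, then detach before the next keyid/dhgroup.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0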
00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.881 00:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.881 nvme0n1 00:35:33.881 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.881 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.881 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.881 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.881 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.881 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.140 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.400 nvme0n1 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.400 00:09:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.400 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.659 nvme0n1 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.659 00:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.918 nvme0n1 00:35:34.918 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.918 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.918 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.918 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.918 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:35.177 00:09:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.177 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.436 nvme0n1 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.436 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.695 nvme0n1 00:35:35.695 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.695 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.695 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.695 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.695 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.695 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.695 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.696 00:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.265 nvme0n1 00:35:36.265 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.265 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.265 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.265 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.265 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.265 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.265 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.265 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.265 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.265 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.532 00:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.101 nvme0n1 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.101 00:09:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.101 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.684 nvme0n1 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:37.684 
00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.684 00:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.257 nvme0n1 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.257 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.258 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.825 nvme0n1 00:35:38.825 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.825 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.825 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.825 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:38.826 00:09:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.826 00:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.772 nvme0n1 00:35:39.772 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.772 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.772 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.772 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.772 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.772 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.772 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.772 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.772 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.772 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.773 00:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.713 nvme0n1 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.713 00:09:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:40.713 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.714 00:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.089 nvme0n1 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:42.089 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:42.090 00:09:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.090 00:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.025 nvme0n1 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.025 00:09:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.025 00:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.965 nvme0n1 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:43.965 
00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.965 00:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.965 nvme0n1 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.965 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:43.966 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:43.966 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:43.966 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:43.966 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.966 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.966 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.966 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:43.966 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.966 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:43.966 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.966 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.224 nvme0n1 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:44.224 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.225 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.483 nvme0n1 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:44.483 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:44.484 00:09:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.484 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.743 nvme0n1 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:44.743 00:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.003 nvme0n1 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:45.003 00:09:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.003 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.264 nvme0n1 00:35:45.264 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.264 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.264 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.264 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.264 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.264 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.264 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.265 00:09:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.265 00:09:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.265 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.524 nvme0n1 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.524 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.525 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.783 nvme0n1 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.783 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.784 00:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.042 nvme0n1 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:46.042 
00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.042 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.301 nvme0n1 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.301 
00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.301 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.559 nvme0n1 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:46.559 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:46.819 00:09:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.819 00:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.079 nvme0n1 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.079 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.338 nvme0n1 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.338 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.339 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.908 nvme0n1 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.908 00:09:13 
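Each connect_authenticate iteration in the trace is the same four-step RPC sequence: restrict the host's DH-HMAC-CHAP digest and DH group, attach a controller with the key under test (plus the controller key when one exists), confirm the controller name via bdev_nvme_get_controllers, and detach. A hedged equivalent using SPDK's scripts/rpc.py, assuming rpc_cmd in the test is a thin wrapper around it, that the target listens on 10.0.0.1:4420 as in the log, and that the key names key2/ckey2 were registered earlier in the test (outside this excerpt):

```bash
#!/usr/bin/env bash
rpc=./scripts/rpc.py   # assumed location of SPDK's RPC client

# Limit negotiation to the digest/DH group pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# Attach with the unidirectional key; --dhchap-ctrlr-key enables bidirectional auth.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Success shows up as a controller named nvme0.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'

# Detach before moving to the next digest/dhgroup/keyid combination.
$rpc bdev_nvme_detach_controller nvme0
```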
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.908 00:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.166 nvme0n1 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.166 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.736 nvme0n1 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.736 00:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.305 nvme0n1 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.305 00:09:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.305 00:09:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.305 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.875 nvme0n1 00:35:49.875 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.875 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.876 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.876 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.876 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.876 00:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:49.876 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.876 
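On the target side, nvmet_auth_set_key is where the values echoed in the trace end up: 'hmac(sha384)' at auth.sh@48, the DH group at @49, the DHHC-1 key at @50, and the optional controller key behind the [[ -z ... ]] check at @51. The excerpt does not show where those echoes are written, so the following is a hypothetical reconstruction that assumes the usual nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under the allowed-host entry:

```bash
# Hypothetical reconstruction of nvmet_auth_set_key; configfs paths and attribute names are assumptions.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed host entry

    echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. 'hmac(sha384)', echoed at auth.sh@48
    echo "$dhgroup"      > "$host/dhchap_dhgroup"   # e.g. 'ffdhe6144', echoed at auth.sh@49
    echo "$key"          > "$host/dhchap_key"       # DHHC-1 secret echoed at auth.sh@50
    # Key index 4 carries no controller key, hence the [[ -z '' ]] branch at auth.sh@51.
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
}
```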
00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.442 nvme0n1 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.442 00:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.009 nvme0n1 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.009 00:09:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.009 00:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.948 nvme0n1 00:35:51.948 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.948 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.948 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.948 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.948 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.207 00:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.141 nvme0n1 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.141 
00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.141 00:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.086 nvme0n1 00:35:54.086 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.086 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.086 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.086 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.086 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.086 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.086 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.086 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.086 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.086 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.346 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.347 00:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.287 nvme0n1 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.287 00:09:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.287 00:09:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.287 00:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.228 nvme0n1 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.228 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:56.487 nvme0n1 00:35:56.487 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.487 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.487 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.487 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.487 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.487 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.488 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.748 nvme0n1 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:56.748 
00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.748 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.007 nvme0n1 00:35:57.007 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.007 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.007 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.007 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.007 00:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.007 
00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.007 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.008 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.266 nvme0n1 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.266 nvme0n1 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.266 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.526 nvme0n1 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.526 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.784 
00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.784 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.784 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.784 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.784 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.784 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.784 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:57.784 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.784 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.784 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.784 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:57.784 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.785 00:09:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.785 nvme0n1 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.785 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.044 00:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:58.044 00:09:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.044 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.045 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.045 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.045 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.045 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.045 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.045 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.045 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.045 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:58.045 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.045 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.304 nvme0n1 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:58.304 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.305 00:09:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.305 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.565 nvme0n1 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:58.565 
00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.565 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:58.566 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.566 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:58.826 nvme0n1 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:58.826 00:09:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.826 00:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.087 nvme0n1 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.087 00:09:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.087 00:09:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.087 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.345 nvme0n1 00:35:59.345 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.345 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.345 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.345 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.345 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.345 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.605 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.606 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.606 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.606 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.606 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:59.606 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.606 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.865 nvme0n1 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.865 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.866 00:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.124 nvme0n1 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.124 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.691 nvme0n1 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.692 00:09:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.692 00:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.259 nvme0n1 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.259 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:01.260 00:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.260 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.829 nvme0n1 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.829 00:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.398 nvme0n1 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.398 00:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.967 nvme0n1 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:36:02.967 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:02.968 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:02.968 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.968 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:02.968 00:09:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:02.968 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:02.968 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.968 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:02.968 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.968 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.226 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:03.227 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.227 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.797 nvme0n1 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFiOTY2ZTZmYzUxYWQ2NWI0YjZhMWUyMTY2NDBjOGb2nrMU: 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: ]] 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg4ZDZhMThkNDExMGY5MWY1YjFkNWJhMjJlMWQ5ZmI0YWM3ZjZhYWJlY2IzYWY3YTRlMzhkNjE0YmY5Njc2ONrlu4M=: 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.797 00:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.734 nvme0n1 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.734 00:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.676 nvme0n1 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.676 00:09:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.676 00:09:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.676 00:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.668 nvme0n1 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkOGU0ODg1YTcwMWM2OWQ4MzEyMWE2NzdiNDNmZjE4NmUyOTE1ZTBlZWVlMzMy8V8Vmw==: 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: ]] 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJhYWQ4Yzc2OTllMjQzYzU0NThlNDc3MDQ4OTBlYjZnpDx+: 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:06.668 00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.668 
00:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.052 nvme0n1 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNiOTc1M2M4ZjlmNDdhNGMyMDQwOWUxODZlMmI0ODU1ZTAwNTAyOThmZDM2MDY2MDIzYzQxOWYyNDMzNmI0ZmtnVBg=: 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.052 00:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.987 nvme0n1 00:36:08.987 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.987 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.987 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.987 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.987 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.987 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.987 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.988 00:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.988 request: 00:36:08.988 { 00:36:08.988 "name": "nvme0", 00:36:08.988 "trtype": "tcp", 00:36:08.988 "traddr": "10.0.0.1", 00:36:08.988 "adrfam": "ipv4", 00:36:08.988 "trsvcid": "4420", 00:36:08.988 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:08.988 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:08.988 "prchk_reftag": false, 00:36:08.988 "prchk_guard": false, 00:36:08.988 "hdgst": false, 00:36:08.988 "ddgst": false, 00:36:08.988 "allow_unrecognized_csi": false, 00:36:08.988 "method": "bdev_nvme_attach_controller", 00:36:08.988 "req_id": 1 00:36:08.988 } 00:36:08.988 Got JSON-RPC error response 00:36:08.988 response: 00:36:08.988 { 00:36:08.988 "code": -5, 00:36:08.988 "message": "Input/output error" 00:36:08.988 } 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
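
The trace above walks the positive path of this test: for each DH group (ffdhe6144, then ffdhe8192) and each key index 0 through 4, nvmet_auth_set_key writes the expected DH-HMAC-CHAP secret, HMAC ("hmac(sha512)") and DH group into the kernel nvmet configfs entry for the host, bdev_nvme_set_options restricts the SPDK initiator to the same digest/DH group, and bdev_nvme_attach_controller authenticates with the matching --dhchap-key/--dhchap-ctrlr-key pair before the controller is detached again. A minimal sketch of one such iteration follows; it assumes the usual nvmet configfs layout, assumes the key1/ckey1 keyring entries were registered earlier in the test (for example via keyring_file_add_key, which happens outside this excerpt), and uses rpc.py in place of the test's rpc_cmd wrapper.

    # Target side (kernel nvmet): expected host secret, HMAC and DH group for this host NQN.
    HOST_CFG=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$HOST_CFG/dhchap_hash"
    echo ffdhe6144      > "$HOST_CFG/dhchap_dhgroup"
    echo 'DHHC-1:00:<host secret>'  > "$HOST_CFG/dhchap_key"        # placeholder secrets
    echo 'DHHC-1:02:<ctrlr secret>' > "$HOST_CFG/dhchap_ctrl_key"   # for illustration only

    # Host side (SPDK): allow only the digest/DH group under test, then authenticate.
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    rpc.py bdev_nvme_detach_controller nvme0
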
00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.988 request: 00:36:08.988 { 00:36:08.988 "name": "nvme0", 00:36:08.988 "trtype": "tcp", 00:36:08.988 "traddr": "10.0.0.1", 00:36:08.988 "adrfam": "ipv4", 00:36:08.988 "trsvcid": "4420", 00:36:08.988 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:08.988 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:08.988 "prchk_reftag": false, 00:36:08.988 "prchk_guard": false, 00:36:08.988 "hdgst": false, 00:36:08.988 "ddgst": false, 00:36:08.988 "dhchap_key": "key2", 00:36:08.988 "allow_unrecognized_csi": false, 00:36:08.988 "method": "bdev_nvme_attach_controller", 00:36:08.988 "req_id": 1 00:36:08.988 } 00:36:08.988 Got JSON-RPC error response 00:36:08.988 response: 00:36:08.988 { 00:36:08.988 "code": -5, 00:36:08.988 "message": "Input/output error" 00:36:08.988 } 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.988 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
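
The request/response pairs just above are the negative checks that follow: with the target re-keyed to expect key index 1 (sha256/ffdhe2048), attaching with no DH-CHAP key at all, or with the wrong key slot (key2), must fail, and the failure surfaces as JSON-RPC error -5 ("Input/output error") from bdev_nvme_attach_controller. The test asserts this by wrapping the RPC in its NOT helper so the step passes only when the command fails. A rough stand-in for that pattern is sketched below, with NOT reduced to its essence rather than copied from autotest_common.sh.

    # Simplified stand-in for the test's NOT helper: succeed only when the wrapped command fails.
    NOT() { ! "$@"; }

    # No key, or the wrong key slot: authentication is rejected and the RPC returns -5.
    NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

Later in the trace the same idea is applied to re-keying: bdev_nvme_set_keys nvme0 with the matching key2/ckey2 pair succeeds once the target has been switched to key index 2, while a mismatched key1/ckey2 pair is rejected with -13 ("Permission denied").
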
00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:09.246 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.247 request: 00:36:09.247 { 00:36:09.247 "name": "nvme0", 00:36:09.247 "trtype": "tcp", 00:36:09.247 "traddr": "10.0.0.1", 00:36:09.247 "adrfam": "ipv4", 00:36:09.247 "trsvcid": "4420", 00:36:09.247 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:09.247 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:09.247 "prchk_reftag": false, 00:36:09.247 "prchk_guard": false, 00:36:09.247 "hdgst": false, 00:36:09.247 "ddgst": false, 00:36:09.247 "dhchap_key": "key1", 00:36:09.247 "dhchap_ctrlr_key": "ckey2", 00:36:09.247 "allow_unrecognized_csi": false, 00:36:09.247 "method": "bdev_nvme_attach_controller", 00:36:09.247 "req_id": 1 00:36:09.247 } 00:36:09.247 Got JSON-RPC error response 00:36:09.247 response: 00:36:09.247 { 00:36:09.247 "code": -5, 00:36:09.247 "message": "Input/output 
error" 00:36:09.247 } 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.247 nvme0n1 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.247 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.505 request: 00:36:09.505 { 00:36:09.505 "name": "nvme0", 00:36:09.505 "dhchap_key": "key1", 00:36:09.505 "dhchap_ctrlr_key": "ckey2", 00:36:09.505 "method": "bdev_nvme_set_keys", 00:36:09.505 "req_id": 1 00:36:09.505 } 00:36:09.505 Got JSON-RPC error response 00:36:09.505 response: 00:36:09.505 { 00:36:09.505 "code": -13, 00:36:09.505 "message": "Permission denied" 00:36:09.505 } 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:09.505 00:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:10.885 00:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.885 00:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.885 00:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.885 00:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:10.885 00:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.885 00:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:10.885 00:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:11.820 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.820 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:11.820 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA1ZWUzMDE1NmVkOWRhOTVmNTUwYTNkMGFjNWMzYmJlNzhhNTljY2VkNzg1OTFkIkBhkg==: 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: ]] 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NDdlYTc0MDFhZDc0YTMwMDhmMWUzMmUxZDlhYTFjMDFhOTg4NDliZTRhNjgzZmU3n/FAaQ==: 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.821 nvme0n1 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZThkYmVkMjc5OWRlNTM1NmI4ZmVhNTE5ODYxZjJjOTQt6uup: 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: ]] 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2ZjliYTZlNzVjMDc3ZWMzNzViZDg4ZGFmMGIyYzhHktwP: 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.821 request: 00:36:11.821 { 00:36:11.821 "name": "nvme0", 00:36:11.821 "dhchap_key": "key2", 00:36:11.821 "dhchap_ctrlr_key": "ckey1", 00:36:11.821 "method": "bdev_nvme_set_keys", 00:36:11.821 "req_id": 1 00:36:11.821 } 00:36:11.821 Got JSON-RPC error response 00:36:11.821 response: 00:36:11.821 { 00:36:11.821 "code": -13, 00:36:11.821 "message": "Permission denied" 00:36:11.821 } 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:11.821 00:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:13.203 00:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.203 00:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:13.203 00:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.203 00:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:13.203 00:09:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:13.203 rmmod nvme_tcp 00:36:13.203 rmmod nvme_fabrics 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3619485 ']' 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3619485 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 3619485 ']' 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 3619485 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3619485 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3619485' 00:36:13.203 killing process with pid 3619485 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 3619485 00:36:13.203 00:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 3619485 00:36:14.150 00:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:14.150 00:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:14.150 00:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:14.150 00:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:14.150 00:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:14.150 00:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:14.150 00:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:14.150 00:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:14.150 00:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:14.150 00:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.150 00:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:36:14.150 00:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:16.057 00:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:17.434 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:17.434 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:17.434 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:17.434 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:17.434 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:17.434 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:17.434 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:17.434 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:17.434 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:17.434 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:17.434 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:17.434 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:17.434 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:17.434 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:17.434 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:17.434 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:18.371 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:18.371 00:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.vvA /tmp/spdk.key-null.I8a /tmp/spdk.key-sha256.42J /tmp/spdk.key-sha384.QTo /tmp/spdk.key-sha512.imN /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:18.371 00:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:19.744 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:19.744 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:19.744 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
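The cleanup traced above dismantles the kernel nvmet target that acted as the DH-HMAC-CHAP peer. The configfs hierarchy has to be removed leaf-first (host ACL symlink, host entry, port-to-subsystem link, namespace, port, subsystem) or the rmdir calls fail with EBUSY. Below is a minimal standalone sketch of the same teardown, assuming the single-namespace/single-port layout the test created earlier; the redirect target of the "echo 0" step is not visible in the trace and is assumed here to be the namespace enable flag.

    #!/usr/bin/env bash
    # Hedged sketch of the kernel nvmet teardown traced above (leaf-first order).
    set -e
    nvmet=/sys/kernel/config/nvmet
    subsys=nqn.2024-02.io.spdk:cnode0
    host=nqn.2024-02.io.spdk:host0

    rm -f "$nvmet/subsystems/$subsys/allowed_hosts/$host"      # drop host ACL symlink
    rmdir "$nvmet/hosts/$host"                                  # remove host entry
    echo 0 > "$nvmet/subsystems/$subsys/namespaces/1/enable"    # assumption: disable namespace 1 first
    rm -f "$nvmet/ports/1/subsystems/$subsys"                   # unlink subsystem from port
    rmdir "$nvmet/subsystems/$subsys/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$nvmet/subsystems/$subsys"
    modprobe -r nvmet_tcp nvmet                                 # unload transport and core modules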
00:36:19.744 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:19.744 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:19.744 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:19.744 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:19.744 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:19.744 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:19.744 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:19.744 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:19.744 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:19.744 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:19.744 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:19.744 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:19.744 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:19.744 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:19.744 00:36:19.744 real 0m55.364s 00:36:19.744 user 0m53.150s 00:36:19.744 sys 0m6.273s 00:36:19.744 00:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:19.744 00:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.744 ************************************ 00:36:19.745 END TEST nvmf_auth_host 00:36:19.745 ************************************ 00:36:19.745 00:09:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:19.745 00:09:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:19.745 00:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:19.745 00:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:19.745 00:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.745 ************************************ 00:36:19.745 START TEST nvmf_digest 00:36:19.745 ************************************ 00:36:19.745 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:19.745 * Looking for test storage... 
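nvmf_auth_host finished in roughly 55 seconds of wall time. Stripped of the xtrace noise, the host-side DH-HMAC-CHAP flow it exercised reduces to the RPC sequence below. This is a hedged recap rather than the script itself: key1/ckey1/key2/ckey2 are keyring names registered earlier in auth.sh from the DHHC-1:* secrets visible above, and the reconnect/timeout flags the real test passes are omitted; the -5 and -13 error codes are the ones shown in the trace.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # mismatched controller key: the attach is expected to fail with -5 (Input/output error)
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2 || true

    # matching key pair: controller nvme0 comes up
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # re-key the live controller after the target rotates its secret; a stale or
    # mismatched pair is rejected with -13 (Permission denied) before any state changes
    $rpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2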
00:36:19.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:19.745 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:19.745 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:36:19.745 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:20.003 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:20.003 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:20.003 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:20.003 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:20.003 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:20.003 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:20.003 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:20.003 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:20.003 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:20.003 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:20.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.004 --rc genhtml_branch_coverage=1 00:36:20.004 --rc genhtml_function_coverage=1 00:36:20.004 --rc genhtml_legend=1 00:36:20.004 --rc geninfo_all_blocks=1 00:36:20.004 --rc geninfo_unexecuted_blocks=1 00:36:20.004 00:36:20.004 ' 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:20.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.004 --rc genhtml_branch_coverage=1 00:36:20.004 --rc genhtml_function_coverage=1 00:36:20.004 --rc genhtml_legend=1 00:36:20.004 --rc geninfo_all_blocks=1 00:36:20.004 --rc geninfo_unexecuted_blocks=1 00:36:20.004 00:36:20.004 ' 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:20.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.004 --rc genhtml_branch_coverage=1 00:36:20.004 --rc genhtml_function_coverage=1 00:36:20.004 --rc genhtml_legend=1 00:36:20.004 --rc geninfo_all_blocks=1 00:36:20.004 --rc geninfo_unexecuted_blocks=1 00:36:20.004 00:36:20.004 ' 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:20.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.004 --rc genhtml_branch_coverage=1 00:36:20.004 --rc genhtml_function_coverage=1 00:36:20.004 --rc genhtml_legend=1 00:36:20.004 --rc geninfo_all_blocks=1 00:36:20.004 --rc geninfo_unexecuted_blocks=1 00:36:20.004 00:36:20.004 ' 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:20.004 
00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:20.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:20.004 00:09:45 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:20.004 00:09:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.004 00:09:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:20.004 00:09:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:20.004 00:09:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:20.004 00:09:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:21.909 
00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:21.909 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:21.909 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.909 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:21.910 Found net devices under 0000:0a:00.0: cvl_0_0 
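The device discovery above pairs each supported NIC PCI function with its kernel net interface by globbing /sys/bus/pci/devices/<bdf>/net/. The same mapping can be reproduced outside the harness with a few lines of shell; the 8086:159b filter below matches the Intel E810 functions found in this run and is the only assumption beyond standard pciutils/sysfs behaviour.

    # Hedged sketch: list the net interfaces behind each E810 (8086:159b) function.
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $netdev ]] || continue                 # a function may have no bound netdev
            echo "$pci -> ${netdev##*/}"
        done
    done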
00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:21.910 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:21.910 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:22.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:22.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:36:22.168 00:36:22.168 --- 10.0.0.2 ping statistics --- 00:36:22.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.168 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:22.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:22.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:36:22.168 00:36:22.168 --- 10.0.0.1 ping statistics --- 00:36:22.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.168 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:22.168 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.168 ************************************ 00:36:22.168 START TEST nvmf_digest_clean 00:36:22.168 ************************************ 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3630243 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3630243 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3630243 ']' 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:22.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:22.169 00:09:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:22.169 [2024-11-10 00:09:48.287443] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:36:22.169 [2024-11-10 00:09:48.287613] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:22.427 [2024-11-10 00:09:48.447168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.427 [2024-11-10 00:09:48.578852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:22.427 [2024-11-10 00:09:48.578938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:22.427 [2024-11-10 00:09:48.578963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:22.427 [2024-11-10 00:09:48.578987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:22.427 [2024-11-10 00:09:48.579007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
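With the target app parked in --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, everything that appears next (the null0 bdev, the TCP transport init, the listener on 10.0.0.2:4420) is driven over /var/tmp/spdk.sock. A hedged reconstruction of that RPC sequence is sketched below: the method names are standard SPDK RPCs and the transport options mirror NVMF_TRANSPORT_OPTS above, but the null bdev size/block size are assumptions rather than a copy of digest.sh.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc framework_start_init                          # leave the --wait-for-rpc holding state
    $rpc nvmf_create_transport -t tcp -o               # "-t tcp -o" per NVMF_TRANSPORT_OPTS
    $rpc bdev_null_create null0 100 4096               # 100 MiB / 4 KiB blocks (sizes assumed)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420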
00:36:22.427 [2024-11-10 00:09:48.580617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.362 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:23.362 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:36:23.362 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:23.362 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:23.362 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:23.362 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:23.362 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:23.362 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:23.362 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:23.362 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.362 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:23.619 null0 00:36:23.619 [2024-11-10 00:09:49.667061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.619 [2024-11-10 00:09:49.691375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3630397 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3630397 /var/tmp/bperf.sock 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3630397 ']' 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:23.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:23.619 00:09:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:23.619 [2024-11-10 00:09:49.785502] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:36:23.619 [2024-11-10 00:09:49.785651] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630397 ] 00:36:23.876 [2024-11-10 00:09:49.932010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.133 [2024-11-10 00:09:50.077748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.699 00:09:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:24.699 00:09:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:36:24.699 00:09:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:24.699 00:09:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:24.699 00:09:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:25.278 00:09:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:25.278 00:09:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:25.844 nvme0n1 00:36:25.844 00:09:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:25.844 00:09:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:25.844 Running I/O for 2 seconds... 
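Condensed, the initiator-side sequence the test just drove against the bdevperf app (commands taken from the RPC calls visible above, with repository paths shortened) is roughly:

  # Complete the deferred init of the bdevperf application
  rpc.py -s /var/tmp/bperf.sock framework_start_init

  # Attach the target over TCP with data digest enabled on the connection (--ddgst)
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Run the configured workload (randread, 4 KiB blocks, queue depth 128 for this pass)
  bdevperf.py -s /var/tmp/bperf.sock perform_tests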
00:36:28.151 13512.00 IOPS, 52.78 MiB/s [2024-11-09T23:09:54.352Z] 14015.50 IOPS, 54.75 MiB/s 00:36:28.151 Latency(us) 00:36:28.151 [2024-11-09T23:09:54.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.151 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:28.151 nvme0n1 : 2.01 14035.27 54.83 0.00 0.00 9107.41 4636.07 21165.70 00:36:28.151 [2024-11-09T23:09:54.352Z] =================================================================================================================== 00:36:28.151 [2024-11-09T23:09:54.352Z] Total : 14035.27 54.83 0.00 0.00 9107.41 4636.07 21165.70 00:36:28.151 { 00:36:28.151 "results": [ 00:36:28.151 { 00:36:28.151 "job": "nvme0n1", 00:36:28.151 "core_mask": "0x2", 00:36:28.151 "workload": "randread", 00:36:28.151 "status": "finished", 00:36:28.151 "queue_depth": 128, 00:36:28.151 "io_size": 4096, 00:36:28.151 "runtime": 2.006302, 00:36:28.151 "iops": 14035.274848950956, 00:36:28.151 "mibps": 54.82529237871467, 00:36:28.151 "io_failed": 0, 00:36:28.151 "io_timeout": 0, 00:36:28.151 "avg_latency_us": 9107.41394867505, 00:36:28.151 "min_latency_us": 4636.065185185185, 00:36:28.151 "max_latency_us": 21165.70074074074 00:36:28.151 } 00:36:28.151 ], 00:36:28.151 "core_count": 1 00:36:28.151 } 00:36:28.151 00:09:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:28.151 00:09:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:28.151 00:09:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:28.151 00:09:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:28.151 00:09:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:28.151 | select(.opcode=="crc32c") 00:36:28.151 | "\(.module_name) \(.executed)"' 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3630397 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3630397 ']' 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3630397 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3630397 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3630397' 00:36:28.151 killing process with pid 3630397 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3630397 00:36:28.151 Received shutdown signal, test time was about 2.000000 seconds 00:36:28.151 00:36:28.151 Latency(us) 00:36:28.151 [2024-11-09T23:09:54.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.151 [2024-11-09T23:09:54.352Z] =================================================================================================================== 00:36:28.151 [2024-11-09T23:09:54.352Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:28.151 00:09:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3630397 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3631062 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3631062 /var/tmp/bperf.sock 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3631062 ']' 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:29.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:29.085 00:09:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:29.085 [2024-11-10 00:09:55.244722] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:36:29.085 [2024-11-10 00:09:55.244873] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631062 ] 00:36:29.085 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:29.085 Zero copy mechanism will not be used. 00:36:29.344 [2024-11-10 00:09:55.386335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.344 [2024-11-10 00:09:55.522998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.277 00:09:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:30.277 00:09:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:36:30.277 00:09:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:30.277 00:09:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:30.277 00:09:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:30.842 00:09:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:30.842 00:09:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:31.100 nvme0n1 00:36:31.100 00:09:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:31.100 00:09:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:31.359 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:31.359 Zero copy mechanism will not be used. 00:36:31.359 Running I/O for 2 seconds... 
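After each run the script asks the bdevperf app which accel module actually executed the CRC32C digest operations; with DSA scanning disabled (scan_dsa=false) it expects the software module with a non-zero count. The check boils down to (jq filter copied from the log, rpc.py path shortened):

  rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # Expected output here: "software <count>" with count > 0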
00:36:33.227 4471.00 IOPS, 558.88 MiB/s [2024-11-09T23:09:59.428Z] 4508.00 IOPS, 563.50 MiB/s 00:36:33.227 Latency(us) 00:36:33.227 [2024-11-09T23:09:59.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.227 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:33.227 nvme0n1 : 2.00 4507.33 563.42 0.00 0.00 3543.24 1092.27 10388.67 00:36:33.227 [2024-11-09T23:09:59.428Z] =================================================================================================================== 00:36:33.227 [2024-11-09T23:09:59.428Z] Total : 4507.33 563.42 0.00 0.00 3543.24 1092.27 10388.67 00:36:33.227 { 00:36:33.227 "results": [ 00:36:33.227 { 00:36:33.227 "job": "nvme0n1", 00:36:33.227 "core_mask": "0x2", 00:36:33.227 "workload": "randread", 00:36:33.227 "status": "finished", 00:36:33.227 "queue_depth": 16, 00:36:33.227 "io_size": 131072, 00:36:33.227 "runtime": 2.003847, 00:36:33.227 "iops": 4507.330150455598, 00:36:33.227 "mibps": 563.4162688069498, 00:36:33.227 "io_failed": 0, 00:36:33.227 "io_timeout": 0, 00:36:33.227 "avg_latency_us": 3543.23920349047, 00:36:33.227 "min_latency_us": 1092.2666666666667, 00:36:33.227 "max_latency_us": 10388.66962962963 00:36:33.227 } 00:36:33.227 ], 00:36:33.227 "core_count": 1 00:36:33.227 } 00:36:33.484 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:33.485 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:33.485 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:33.485 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:33.485 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:33.485 | select(.opcode=="crc32c") 00:36:33.485 | "\(.module_name) \(.executed)"' 00:36:33.742 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:33.742 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:33.742 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:33.742 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:33.742 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3631062 00:36:33.742 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3631062 ']' 00:36:33.742 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3631062 00:36:33.743 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:36:33.743 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:33.743 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3631062 00:36:33.743 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:33.743 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:36:33.743 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3631062' 00:36:33.743 killing process with pid 3631062 00:36:33.743 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3631062 00:36:33.743 Received shutdown signal, test time was about 2.000000 seconds 00:36:33.743 00:36:33.743 Latency(us) 00:36:33.743 [2024-11-09T23:09:59.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.743 [2024-11-09T23:09:59.944Z] =================================================================================================================== 00:36:33.743 [2024-11-09T23:09:59.944Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:33.743 00:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3631062 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3631726 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3631726 /var/tmp/bperf.sock 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3631726 ']' 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:34.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:34.677 00:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:34.677 [2024-11-10 00:10:00.652627] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:36:34.677 [2024-11-10 00:10:00.652779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631726 ] 00:36:34.677 [2024-11-10 00:10:00.795216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.936 [2024-11-10 00:10:00.925459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.502 00:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:35.502 00:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:36:35.502 00:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:35.502 00:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:35.502 00:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:36.437 00:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.437 00:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.696 nvme0n1 00:36:36.696 00:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:36.696 00:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:36.696 Running I/O for 2 seconds... 
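A quick sanity check on the throughput columns reported above: bdevperf's MiB/s figure is simply IOPS times the I/O size, so the numbers from the two earlier runs can be reproduced directly (values copied from the result tables above):

  # MiB/s = IOPS * io_size_bytes / 2^20
  awk 'BEGIN { printf "%.2f MiB/s\n", 14035.27 * 4096 / 1048576 }'    # ~54.83 for the 4 KiB randread run
  awk 'BEGIN { printf "%.2f MiB/s\n", 4507.33 * 131072 / 1048576 }'   # ~563.42 for the 128 KiB randread run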
00:36:39.016 15643.00 IOPS, 61.11 MiB/s [2024-11-09T23:10:05.217Z] 15861.00 IOPS, 61.96 MiB/s 00:36:39.016 Latency(us) 00:36:39.016 [2024-11-09T23:10:05.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.016 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:39.016 nvme0n1 : 2.01 15875.58 62.01 0.00 0.00 8052.54 3932.16 19029.71 00:36:39.016 [2024-11-09T23:10:05.217Z] =================================================================================================================== 00:36:39.016 [2024-11-09T23:10:05.217Z] Total : 15875.58 62.01 0.00 0.00 8052.54 3932.16 19029.71 00:36:39.016 { 00:36:39.016 "results": [ 00:36:39.016 { 00:36:39.016 "job": "nvme0n1", 00:36:39.016 "core_mask": "0x2", 00:36:39.016 "workload": "randwrite", 00:36:39.016 "status": "finished", 00:36:39.016 "queue_depth": 128, 00:36:39.016 "io_size": 4096, 00:36:39.016 "runtime": 2.006226, 00:36:39.016 "iops": 15875.579321571946, 00:36:39.016 "mibps": 62.013981724890414, 00:36:39.016 "io_failed": 0, 00:36:39.016 "io_timeout": 0, 00:36:39.016 "avg_latency_us": 8052.538904261876, 00:36:39.016 "min_latency_us": 3932.16, 00:36:39.016 "max_latency_us": 19029.712592592594 00:36:39.016 } 00:36:39.016 ], 00:36:39.016 "core_count": 1 00:36:39.016 } 00:36:39.016 00:10:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:39.016 00:10:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:39.016 00:10:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:39.016 00:10:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:39.016 00:10:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:39.016 | select(.opcode=="crc32c") 00:36:39.016 | "\(.module_name) \(.executed)"' 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3631726 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3631726 ']' 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3631726 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3631726 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3631726' 00:36:39.016 killing process with pid 3631726 00:36:39.016 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3631726 00:36:39.016 Received shutdown signal, test time was about 2.000000 seconds 00:36:39.016 00:36:39.016 Latency(us) 00:36:39.017 [2024-11-09T23:10:05.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.017 [2024-11-09T23:10:05.218Z] =================================================================================================================== 00:36:39.017 [2024-11-09T23:10:05.218Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:39.017 00:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3631726 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3632269 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3632269 /var/tmp/bperf.sock 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3632269 ']' 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:39.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:39.958 00:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:39.958 [2024-11-10 00:10:06.114698] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:36:39.958 [2024-11-10 00:10:06.114822] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632269 ] 00:36:39.958 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:39.958 Zero copy mechanism will not be used. 00:36:40.216 [2024-11-10 00:10:06.269958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.216 [2024-11-10 00:10:06.405847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.206 00:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:41.206 00:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:36:41.206 00:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:41.206 00:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:41.206 00:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:41.774 00:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:41.774 00:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:42.033 nvme0n1 00:36:42.033 00:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:42.033 00:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:42.033 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:42.033 Zero copy mechanism will not be used. 00:36:42.033 Running I/O for 2 seconds... 
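Each run also prints its results as a JSON object (the { "results": [...] } blocks above). If that JSON were captured to a file, the headline numbers could be pulled out with a short jq filter; the file name here is hypothetical, the keys are the ones visible in the log:

  # Hypothetical capture: bdevperf JSON results saved to results.json
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json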
00:36:44.337 4624.00 IOPS, 578.00 MiB/s [2024-11-09T23:10:10.538Z] 5000.50 IOPS, 625.06 MiB/s 00:36:44.337 Latency(us) 00:36:44.337 [2024-11-09T23:10:10.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.337 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:44.337 nvme0n1 : 2.01 4998.12 624.77 0.00 0.00 3191.11 2184.53 5776.88 00:36:44.337 [2024-11-09T23:10:10.538Z] =================================================================================================================== 00:36:44.337 [2024-11-09T23:10:10.538Z] Total : 4998.12 624.77 0.00 0.00 3191.11 2184.53 5776.88 00:36:44.337 { 00:36:44.337 "results": [ 00:36:44.337 { 00:36:44.337 "job": "nvme0n1", 00:36:44.337 "core_mask": "0x2", 00:36:44.337 "workload": "randwrite", 00:36:44.337 "status": "finished", 00:36:44.337 "queue_depth": 16, 00:36:44.337 "io_size": 131072, 00:36:44.337 "runtime": 2.005153, 00:36:44.337 "iops": 4998.122337796667, 00:36:44.337 "mibps": 624.7652922245834, 00:36:44.337 "io_failed": 0, 00:36:44.337 "io_timeout": 0, 00:36:44.337 "avg_latency_us": 3191.113291795088, 00:36:44.337 "min_latency_us": 2184.5333333333333, 00:36:44.337 "max_latency_us": 5776.877037037037 00:36:44.337 } 00:36:44.337 ], 00:36:44.337 "core_count": 1 00:36:44.337 } 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:44.337 | select(.opcode=="crc32c") 00:36:44.337 | "\(.module_name) \(.executed)"' 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3632269 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3632269 ']' 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3632269 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3632269 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3632269' 00:36:44.337 killing process with pid 3632269 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3632269 00:36:44.337 Received shutdown signal, test time was about 2.000000 seconds 00:36:44.337 00:36:44.337 Latency(us) 00:36:44.337 [2024-11-09T23:10:10.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.337 [2024-11-09T23:10:10.538Z] =================================================================================================================== 00:36:44.337 [2024-11-09T23:10:10.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:44.337 00:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3632269 00:36:45.273 00:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3630243 00:36:45.273 00:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3630243 ']' 00:36:45.273 00:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3630243 00:36:45.273 00:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:36:45.273 00:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:45.273 00:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3630243 00:36:45.273 00:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:45.273 00:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:45.273 00:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3630243' 00:36:45.273 killing process with pid 3630243 00:36:45.273 00:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3630243 00:36:45.273 00:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3630243 00:36:46.647 00:36:46.647 real 0m24.411s 00:36:46.647 user 0m47.700s 00:36:46.647 sys 0m4.790s 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:46.647 ************************************ 00:36:46.647 END TEST nvmf_digest_clean 00:36:46.647 ************************************ 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:46.647 ************************************ 00:36:46.647 START TEST nvmf_digest_error 00:36:46.647 ************************************ 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3633098 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3633098 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3633098 ']' 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:46.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:46.647 00:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:46.647 [2024-11-10 00:10:12.741547] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:36:46.647 [2024-11-10 00:10:12.741729] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:46.905 [2024-11-10 00:10:12.895314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.905 [2024-11-10 00:10:13.023673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:46.905 [2024-11-10 00:10:13.023747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:46.905 [2024-11-10 00:10:13.023769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:46.905 [2024-11-10 00:10:13.023790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:46.905 [2024-11-10 00:10:13.023807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:46.905 [2024-11-10 00:10:13.025264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.838 [2024-11-10 00:10:13.771996] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.838 00:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.098 null0 00:36:48.098 [2024-11-10 00:10:14.119295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:48.098 [2024-11-10 00:10:14.143558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3633298 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3633298 /var/tmp/bperf.sock 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3633298 ']' 
00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:48.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:48.098 00:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.098 [2024-11-10 00:10:14.240279] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:36:48.098 [2024-11-10 00:10:14.240416] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633298 ] 00:36:48.357 [2024-11-10 00:10:14.387620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.357 [2024-11-10 00:10:14.523410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:49.290 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:49.290 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:36:49.290 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:49.290 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:49.547 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:49.548 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.548 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:49.548 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.548 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:49.548 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:49.806 nvme0n1 00:36:49.806 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:49.806 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.806 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
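The error-path variant wires CRC32C differently from the clean test: on the target side (rpc_cmd's default /var/tmp/spdk.sock) the crc32c opcode is routed through the error-injecting accel module, injection stays disabled while the initiator attaches, and only then is digest corruption switched on; on the initiator side error statistics are enabled and retries are made effectively unlimited, so the corrupted digests surface as the transient transport errors seen below rather than as hard failures. In terms of the RPCs visible above (paths shortened; -i 256 appears to set the injection interval):

  # Target: route crc32c through the error-injection accel module
  rpc.py accel_assign_opc -o crc32c -m error

  # Initiator: keep NVMe error stats and retry indefinitely on failed I/O
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Keep injection off while attaching, then start corrupting crc32c results
  rpc.py accel_error_inject_error -o crc32c -t disable
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256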
00:36:49.806 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.806 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:49.806 00:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:50.066 Running I/O for 2 seconds... 00:36:50.066 [2024-11-10 00:10:16.149688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.066 [2024-11-10 00:10:16.149781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.066 [2024-11-10 00:10:16.149813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.066 [2024-11-10 00:10:16.168382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.066 [2024-11-10 00:10:16.168442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.066 [2024-11-10 00:10:16.168468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.066 [2024-11-10 00:10:16.188458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.066 [2024-11-10 00:10:16.188517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.066 [2024-11-10 00:10:16.188543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.066 [2024-11-10 00:10:16.202906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.066 [2024-11-10 00:10:16.202964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.066 [2024-11-10 00:10:16.202990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.066 [2024-11-10 00:10:16.222751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.066 [2024-11-10 00:10:16.222798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.066 [2024-11-10 00:10:16.222824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.066 [2024-11-10 00:10:16.243286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.066 [2024-11-10 00:10:16.243346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.066 [2024-11-10 00:10:16.243372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.066 [2024-11-10 00:10:16.264753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.066 [2024-11-10 00:10:16.264805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.066 [2024-11-10 00:10:16.264832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.286466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.286529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 00:10:16.286559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.302749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.302789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 00:10:16.302813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.323286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.323333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 00:10:16.323377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.342998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.343044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 00:10:16.343069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.359033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.359083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 00:10:16.359112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.378620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.378668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 
00:10:16.378697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.397182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.397232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 00:10:16.397276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.413144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.413193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 00:10:16.413222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.434084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.434133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 00:10:16.434161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.456487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.456536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 00:10:16.456564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.477018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.477091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 00:10:16.477134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.493263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.493312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 00:10:16.493355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.325 [2024-11-10 00:10:16.515200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.325 [2024-11-10 00:10:16.515245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:25316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.325 [2024-11-10 00:10:16.515270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.537556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.537618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.537659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.560688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.560737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.560765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.581460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.581509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.581539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.597994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.598043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.598071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.615037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.615085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.615115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.637278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.637326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.637355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.652598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 
00:10:16.652637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.652660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.670431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.670489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.670519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.688771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.688811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.688835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.708505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.708545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.708570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.725932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.725988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.726027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.744834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.744873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.744897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.759946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.760004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.760034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.584 [2024-11-10 00:10:16.778183] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.584 [2024-11-10 00:10:16.778231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.584 [2024-11-10 00:10:16.778260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:16.798775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:16.798817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:16.798841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:16.816268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:16.816317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:16.816345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:16.833386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:16.833436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:16.833464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:16.850219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:16.850267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:16.850295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:16.868112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:16.868160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:16.868188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:16.888969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:16.889019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:16.889048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:16.904454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:16.904502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:16.904531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:16.921800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:16.921848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:16.921877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:16.941261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:16.941301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:16.941326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:16.958065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:16.958112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:16.958141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:16.975796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:16.975835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:16.975859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:16.996752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:16.996799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:16.996828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:17.013948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:17.013996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:17.014025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.842 [2024-11-10 00:10:17.029073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:50.842 [2024-11-10 00:10:17.029120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.842 [2024-11-10 00:10:17.029161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.100 [2024-11-10 00:10:17.049914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.100 [2024-11-10 00:10:17.049976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.100 [2024-11-10 00:10:17.050005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.100 [2024-11-10 00:10:17.068449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.100 [2024-11-10 00:10:17.068491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.068515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.101 [2024-11-10 00:10:17.083001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.101 [2024-11-10 00:10:17.083050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.083080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.101 [2024-11-10 00:10:17.103332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.101 [2024-11-10 00:10:17.103381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.103410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.101 [2024-11-10 00:10:17.123893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.101 [2024-11-10 00:10:17.123950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.123974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.101 13517.00 IOPS, 52.80 MiB/s [2024-11-09T23:10:17.302Z] [2024-11-10 00:10:17.141915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.101 [2024-11-10 00:10:17.141969] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.141999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.101 [2024-11-10 00:10:17.162943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.101 [2024-11-10 00:10:17.163004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.163028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.101 [2024-11-10 00:10:17.180011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.101 [2024-11-10 00:10:17.180059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.180088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.101 [2024-11-10 00:10:17.197486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.101 [2024-11-10 00:10:17.197545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.197575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.101 [2024-11-10 00:10:17.214556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.101 [2024-11-10 00:10:17.214612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.214658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.101 [2024-11-10 00:10:17.231543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.101 [2024-11-10 00:10:17.231583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.231616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.101 [2024-11-10 00:10:17.248850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.101 [2024-11-10 00:10:17.248909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.248934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.101 [2024-11-10 00:10:17.267382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 
00:36:51.101 [2024-11-10 00:10:17.267431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.267459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.101 [2024-11-10 00:10:17.284906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.101 [2024-11-10 00:10:17.284946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.101 [2024-11-10 00:10:17.284969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.359 [2024-11-10 00:10:17.302860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.359 [2024-11-10 00:10:17.302904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.359 [2024-11-10 00:10:17.302944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.359 [2024-11-10 00:10:17.318561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.359 [2024-11-10 00:10:17.318618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.359 [2024-11-10 00:10:17.318660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.359 [2024-11-10 00:10:17.338844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.359 [2024-11-10 00:10:17.338893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.359 [2024-11-10 00:10:17.338930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.359 [2024-11-10 00:10:17.359652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.359 [2024-11-10 00:10:17.359694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.359 [2024-11-10 00:10:17.359718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.359 [2024-11-10 00:10:17.381415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.359 [2024-11-10 00:10:17.381457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.359 [2024-11-10 00:10:17.381482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.359 [2024-11-10 00:10:17.396412] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.359 [2024-11-10 00:10:17.396461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.359 [2024-11-10 00:10:17.396490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.360 [2024-11-10 00:10:17.416002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.360 [2024-11-10 00:10:17.416041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.360 [2024-11-10 00:10:17.416064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.360 [2024-11-10 00:10:17.438027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.360 [2024-11-10 00:10:17.438076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.360 [2024-11-10 00:10:17.438106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.360 [2024-11-10 00:10:17.455023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.360 [2024-11-10 00:10:17.455083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.360 [2024-11-10 00:10:17.455113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.360 [2024-11-10 00:10:17.472613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.360 [2024-11-10 00:10:17.472674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.360 [2024-11-10 00:10:17.472698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.360 [2024-11-10 00:10:17.490115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.360 [2024-11-10 00:10:17.490173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.360 [2024-11-10 00:10:17.490217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.360 [2024-11-10 00:10:17.505172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.360 [2024-11-10 00:10:17.505221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.360 [2024-11-10 00:10:17.505251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.360 [2024-11-10 00:10:17.521902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.360 [2024-11-10 00:10:17.521951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.360 [2024-11-10 00:10:17.521980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.360 [2024-11-10 00:10:17.542049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.360 [2024-11-10 00:10:17.542097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.360 [2024-11-10 00:10:17.542125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.360 [2024-11-10 00:10:17.559530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.360 [2024-11-10 00:10:17.559580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.360 [2024-11-10 00:10:17.559619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.580848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.580891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.580917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.596224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.596274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.596302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.617769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.617820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.617869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.638399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.638457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.638482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.654664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.654708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.654742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.676900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.676947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.676976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.697391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.697440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.697468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.719243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.719301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.719327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.739273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.739330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.739357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.755442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.755490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.755519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.774098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.774158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16027 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.774184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.793796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.793852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.793879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.618 [2024-11-10 00:10:17.808384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.618 [2024-11-10 00:10:17.808440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.618 [2024-11-10 00:10:17.808467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:17.828479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:17.828527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:17.828556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:17.847535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:17.847600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:17.847627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:17.863387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:17.863442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:17.863467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:17.886133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:17.886181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:17.886209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:17.906712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:17.906765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:17.906790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:17.926347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:17.926401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:17.926427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:17.947189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:17.947251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:17.947278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:17.962763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:17.962804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:17.962829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:17.982012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:17.982059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:17.982112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:18.002251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:18.002306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:18.002332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:18.017603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:18.017662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:18.017687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:18.039013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:18.039067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:18.039092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:18.059514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:18.059563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:18.059599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:51.876 [2024-11-10 00:10:18.075189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:51.876 [2024-11-10 00:10:18.075234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:51.876 [2024-11-10 00:10:18.075259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.134 [2024-11-10 00:10:18.098392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.134 [2024-11-10 00:10:18.098449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.134 [2024-11-10 00:10:18.098489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.134 [2024-11-10 00:10:18.116868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.134 [2024-11-10 00:10:18.116922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.134 [2024-11-10 00:10:18.116947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.134 13591.50 IOPS, 53.09 MiB/s [2024-11-09T23:10:18.335Z] [2024-11-10 00:10:18.133208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:52.134 [2024-11-10 00:10:18.133255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.134 [2024-11-10 00:10:18.133284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.134 00:36:52.134 Latency(us) 00:36:52.134 [2024-11-09T23:10:18.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.134 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:52.134 nvme0n1 : 2.01 13620.59 53.21 0.00 0.00 9382.83 4660.34 33981.63 00:36:52.134 [2024-11-09T23:10:18.335Z] =================================================================================================================== 00:36:52.134 
[2024-11-09T23:10:18.335Z] Total : 13620.59 53.21 0.00 0.00 9382.83 4660.34 33981.63 00:36:52.134 { 00:36:52.134 "results": [ 00:36:52.134 { 00:36:52.134 "job": "nvme0n1", 00:36:52.134 "core_mask": "0x2", 00:36:52.134 "workload": "randread", 00:36:52.134 "status": "finished", 00:36:52.134 "queue_depth": 128, 00:36:52.134 "io_size": 4096, 00:36:52.134 "runtime": 2.008209, 00:36:52.134 "iops": 13620.594270815438, 00:36:52.134 "mibps": 53.205446370372805, 00:36:52.134 "io_failed": 0, 00:36:52.134 "io_timeout": 0, 00:36:52.134 "avg_latency_us": 9382.826099919977, 00:36:52.134 "min_latency_us": 4660.337777777778, 00:36:52.134 "max_latency_us": 33981.62962962963 00:36:52.134 } 00:36:52.134 ], 00:36:52.134 "core_count": 1 00:36:52.134 } 00:36:52.134 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:52.134 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:52.134 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:52.134 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:52.134 | .driver_specific 00:36:52.134 | .nvme_error 00:36:52.134 | .status_code 00:36:52.134 | .command_transient_transport_error' 00:36:52.391 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 107 > 0 )) 00:36:52.391 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3633298 00:36:52.391 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3633298 ']' 00:36:52.391 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3633298 00:36:52.391 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:36:52.391 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:52.391 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3633298 00:36:52.391 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:52.391 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:52.391 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3633298' 00:36:52.391 killing process with pid 3633298 00:36:52.391 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3633298 00:36:52.391 Received shutdown signal, test time was about 2.000000 seconds 00:36:52.391 00:36:52.391 Latency(us) 00:36:52.391 [2024-11-09T23:10:18.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.391 [2024-11-09T23:10:18.592Z] =================================================================================================================== 00:36:52.391 [2024-11-09T23:10:18.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:52.392 00:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3633298 00:36:53.325 00:10:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:53.325 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:53.325 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:53.325 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:53.325 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:53.325 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3633921 00:36:53.325 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:53.325 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3633921 /var/tmp/bperf.sock 00:36:53.325 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3633921 ']' 00:36:53.325 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:53.325 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:53.325 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:53.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:53.326 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:53.326 00:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:53.326 [2024-11-10 00:10:19.444791] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:36:53.326 [2024-11-10 00:10:19.444934] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633921 ] 00:36:53.326 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:53.326 Zero copy mechanism will not be used. 
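The trace above shows how run_bperf_err stands up a dedicated bdevperf instance for this pass: the binary runs on core 1 (-m 2) with its own RPC socket (-r /var/tmp/bperf.sock), takes the workload arguments passed in (randread, 131072-byte I/O, queue depth 16, 2 seconds), and -z keeps it idle until a perform_tests RPC arrives. A minimal sketch of that launch, using the binary path, socket, and flags from the trace; the readiness loop only stands in for the harness's waitforlisten helper and is illustrative:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf on its own RPC socket; -z defers all I/O until a
  # perform_tests RPC is received over that socket.
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Illustrative stand-in for waitforlisten: poll the socket until the
  # application starts answering RPCs.
  until "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done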
00:36:53.582 [2024-11-10 00:10:19.584956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.582 [2024-11-10 00:10:19.721361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:54.519 00:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:54.519 00:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:36:54.519 00:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:54.519 00:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:54.779 00:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:54.779 00:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.779 00:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:54.779 00:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.779 00:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:54.780 00:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:55.037 nvme0n1 00:36:55.037 00:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:55.038 00:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.038 00:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:55.038 00:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.038 00:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:55.038 00:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:55.298 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:55.298 Zero copy mechanism will not be used. 00:36:55.298 Running I/O for 2 seconds... 
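The setup traced above is the core of the digest-error case: NVMe error counters are enabled on the bdevperf side, crc32c error injection is first cleared and then armed to corrupt every 32nd operation, and the controller is attached with TCP data digest enabled (--ddgst) so the bad digests are caught on receive (the "data digest error" lines that follow). Once perform_tests finishes, the harness reads the per-bdev error counters and passes only if the command_transient_transport_error count is non-zero, which is the `(( 107 > 0 ))` check seen after the previous run. A condensed sketch assembled from the rpc.py calls visible in the trace; the injection calls go through the harness's default RPC socket, as rpc_cmd does above:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  BPERF_SOCK=/var/tmp/bperf.sock

  # Count NVMe errors per status code and set the bdev retry count as the harness does.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any previous injection, attach the controller with data digest enabled,
  # then corrupt every 32nd crc32c operation so the digests stop matching.
  "$RPC" accel_error_inject_error -o crc32c -t disable
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Drive the timed workload, then read back how many completions ended in a
  # transient transport error (the 00/22 status the digest failures map to).
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
  errs=$("$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))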
00:36:55.298 [2024-11-10 00:10:21.300203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.298 [2024-11-10 00:10:21.300285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.298 [2024-11-10 00:10:21.300334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.298 [2024-11-10 00:10:21.307384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.298 [2024-11-10 00:10:21.307437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.307467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.314118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.314167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.314197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.320763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.320807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.320834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.327304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.327352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.327382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.333736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.333782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.333808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.340256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.340304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.340333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.346816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.346858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.346892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.353423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.353471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.353500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.360087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.360135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.360197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.366602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.366664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.366690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.372865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.372926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.372955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.379549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.379605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.379636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.385966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.386013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 
[2024-11-10 00:10:21.386043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.392201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.392248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.392277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.398496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.398543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.398573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.404888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.404947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.404977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.411238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.411286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.411315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.417241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.417288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.417317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.421250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.421296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.421325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.427462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.427509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.427537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.433652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.433696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.433721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.440091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.440138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.440167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.446418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.446464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.299 [2024-11-10 00:10:21.446493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.299 [2024-11-10 00:10:21.452601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.299 [2024-11-10 00:10:21.452648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.300 [2024-11-10 00:10:21.452701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.300 [2024-11-10 00:10:21.458705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.300 [2024-11-10 00:10:21.458746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.300 [2024-11-10 00:10:21.458771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.300 [2024-11-10 00:10:21.464741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.300 [2024-11-10 00:10:21.464781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.300 [2024-11-10 00:10:21.464805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.300 [2024-11-10 00:10:21.471230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.300 [2024-11-10 
00:10:21.471277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.300 [2024-11-10 00:10:21.471306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.300 [2024-11-10 00:10:21.477394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.300 [2024-11-10 00:10:21.477441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.300 [2024-11-10 00:10:21.477471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.300 [2024-11-10 00:10:21.483791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.300 [2024-11-10 00:10:21.483839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.300 [2024-11-10 00:10:21.483868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.300 [2024-11-10 00:10:21.490199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.300 [2024-11-10 00:10:21.490247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.300 [2024-11-10 00:10:21.490275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.300 [2024-11-10 00:10:21.496568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.300 [2024-11-10 00:10:21.496627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.300 [2024-11-10 00:10:21.496670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.559 [2024-11-10 00:10:21.503038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.559 [2024-11-10 00:10:21.503091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.559 [2024-11-10 00:10:21.503136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.559 [2024-11-10 00:10:21.510197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.559 [2024-11-10 00:10:21.510246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.559 [2024-11-10 00:10:21.510275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.559 [2024-11-10 00:10:21.517425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.559 [2024-11-10 00:10:21.517474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.559 [2024-11-10 00:10:21.517504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.559 [2024-11-10 00:10:21.525261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.559 [2024-11-10 00:10:21.525310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.559 [2024-11-10 00:10:21.525340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.559 [2024-11-10 00:10:21.532266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.559 [2024-11-10 00:10:21.532314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.559 [2024-11-10 00:10:21.532343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.559 [2024-11-10 00:10:21.539172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.559 [2024-11-10 00:10:21.539221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.559 [2024-11-10 00:10:21.539251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.559 [2024-11-10 00:10:21.546568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.559 [2024-11-10 00:10:21.546626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.559 [2024-11-10 00:10:21.546671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.559 [2024-11-10 00:10:21.554128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.559 [2024-11-10 00:10:21.554177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.559 [2024-11-10 00:10:21.554207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.559 [2024-11-10 00:10:21.562080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.559 [2024-11-10 00:10:21.562130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.559 [2024-11-10 00:10:21.562159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.559 [2024-11-10 
00:10:21.569471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.559 [2024-11-10 00:10:21.569520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.559 [2024-11-10 00:10:21.569574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.559 [2024-11-10 00:10:21.576794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.559 [2024-11-10 00:10:21.576839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.576865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.584565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.584622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.584682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.591749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.591791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.591817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.598981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.599030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.599060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.607350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.607398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.607427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.617014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.617065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.617094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.626021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.626071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.626101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.634600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.634663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.634688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.642250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.642303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.642343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.649705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.649750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.649776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.657239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.657307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.657337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.664204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.664252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.664281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.671308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.671357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.671386] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.679055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.679104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.679134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.685875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.685937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.685966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.692700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.692743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.692768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.699430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.699477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.699530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.706329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.706378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.706407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.713357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.713407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.713436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.720372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.720420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.720450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.727273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.727321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.727350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.734345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.734394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.734423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.740796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.740839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.740865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.747607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.560 [2024-11-10 00:10:21.747666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.560 [2024-11-10 00:10:21.747691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.560 [2024-11-10 00:10:21.755106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.561 [2024-11-10 00:10:21.755150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.561 [2024-11-10 00:10:21.755178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.820 [2024-11-10 00:10:21.762782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.820 [2024-11-10 00:10:21.762851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-11-10 00:10:21.762879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.820 [2024-11-10 00:10:21.769769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.820 [2024-11-10 00:10:21.769814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-11-10 00:10:21.769841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.820 [2024-11-10 00:10:21.776974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.820 [2024-11-10 00:10:21.777024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-11-10 00:10:21.777069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.820 [2024-11-10 00:10:21.784106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.820 [2024-11-10 00:10:21.784154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-11-10 00:10:21.784183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.820 [2024-11-10 00:10:21.792660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.820 [2024-11-10 00:10:21.792705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-11-10 00:10:21.792732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.820 [2024-11-10 00:10:21.799952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.820 [2024-11-10 00:10:21.800016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-11-10 00:10:21.800045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.820 [2024-11-10 00:10:21.804949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.820 [2024-11-10 00:10:21.804996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-11-10 00:10:21.805026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.820 [2024-11-10 00:10:21.810207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.820 [2024-11-10 00:10:21.810254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-11-10 00:10:21.810285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.820 [2024-11-10 00:10:21.816545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:55.820 [2024-11-10 00:10:21.816603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-11-10 00:10:21.816657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.820 [2024-11-10 00:10:21.823522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.820 [2024-11-10 00:10:21.823571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-11-10 00:10:21.823614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.820 [2024-11-10 00:10:21.829944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.820 [2024-11-10 00:10:21.829999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.820 [2024-11-10 00:10:21.830030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.837496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.837544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.837573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.846279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.846329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.846358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.855022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.855072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.855102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.863849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.863910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.863953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.872753] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.872798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.872823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.881323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.881372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.881402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.890112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.890172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.890202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.898983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.899032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.899062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.907801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.907845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.907871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.916518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.916567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.916622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.925323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.925372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.925401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.934157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.934207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.934237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.942796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.942839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.942866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.951430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.951478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.951507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.960093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.960144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.960173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.968856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.968917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.968946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.976418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.976467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.976496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.983253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.983301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.983329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.990451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.990501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.990530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:21.997498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:21.997546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:21.997574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:22.004858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:22.004921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:22.004951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:22.012030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:22.012079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:22.012108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:55.821 [2024-11-10 00:10:22.019097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:55.821 [2024-11-10 00:10:22.019162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:55.821 [2024-11-10 00:10:22.019206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.081 [2024-11-10 00:10:22.025802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.081 [2024-11-10 00:10:22.025851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.081 [2024-11-10 00:10:22.025876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.081 [2024-11-10 00:10:22.033600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.081 [2024-11-10 00:10:22.033648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.081 [2024-11-10 00:10:22.033677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.081 [2024-11-10 00:10:22.042268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.081 [2024-11-10 00:10:22.042317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.081 [2024-11-10 00:10:22.042346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.081 [2024-11-10 00:10:22.049611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.081 [2024-11-10 00:10:22.049679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.081 [2024-11-10 00:10:22.049703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.081 [2024-11-10 00:10:22.056920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.081 [2024-11-10 00:10:22.056981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.081 [2024-11-10 00:10:22.057011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.081 [2024-11-10 00:10:22.064175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.081 [2024-11-10 00:10:22.064226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.081 [2024-11-10 00:10:22.064255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.081 [2024-11-10 00:10:22.071247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.081 [2024-11-10 00:10:22.071310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.081 [2024-11-10 00:10:22.071338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.081 [2024-11-10 00:10:22.079181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.081 [2024-11-10 00:10:22.079230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.081 [2024-11-10 00:10:22.079259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.081 [2024-11-10 00:10:22.087258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:56.081 [2024-11-10 00:10:22.087318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.081 [2024-11-10 00:10:22.087349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:56.081 [2024-11-10 00:10:22.095661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:56.081 [2024-11-10 00:10:22.095708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:56.081 [2024-11-10 00:10:22.095735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c:1365 data digest error on tqpair=(0x6150001f2a00), nvme_qpair.c:243 READ command notice, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further READ commands on qid:1 with varying cid and lba values from 00:10:22.103 through 00:10:22.292 ...]
00:36:56.347 4237.00 IOPS, 529.62 MiB/s [2024-11-09T23:10:22.548Z]
[... the same pattern continues from 00:10:22.302 through 00:10:23.074 ...]
00:36:57.135 [2024-11-10 00:10:23.081413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:57.135 [2024-11-10 00:10:23.081462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:57.135 [2024-11-10 00:10:23.081491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.088905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.135 [2024-11-10 00:10:23.088961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.135 [2024-11-10 00:10:23.088991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.096478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.135 [2024-11-10 00:10:23.096526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.135 [2024-11-10 00:10:23.096555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.103304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.135 [2024-11-10 00:10:23.103357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.135 [2024-11-10 00:10:23.103386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.110458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.135 [2024-11-10 00:10:23.110506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.135 [2024-11-10 00:10:23.110534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.116948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.135 [2024-11-10 00:10:23.116995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.135 [2024-11-10 00:10:23.117024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.123830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.135 [2024-11-10 00:10:23.123891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.135 [2024-11-10 00:10:23.123917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.130980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.135 [2024-11-10 00:10:23.131029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.135 [2024-11-10 00:10:23.131058] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.138603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.135 [2024-11-10 00:10:23.138667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.135 [2024-11-10 00:10:23.138694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.146449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.135 [2024-11-10 00:10:23.146499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.135 [2024-11-10 00:10:23.146551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.153918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.135 [2024-11-10 00:10:23.153968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.135 [2024-11-10 00:10:23.153997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.161494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.135 [2024-11-10 00:10:23.161548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.135 [2024-11-10 00:10:23.161578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.169124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.135 [2024-11-10 00:10:23.169176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.135 [2024-11-10 00:10:23.169206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.135 [2024-11-10 00:10:23.176340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.176388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.176417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.180692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.180747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16640 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.180771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.188044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.188093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.188122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.195298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.195348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.195378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.203267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.203317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.203346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.211259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.211317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.211347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.219174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.219224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.219254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.227073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.227123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.227186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.235061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.235110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.235140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.242808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.242866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.242907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.251134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.251183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.251213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.258734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.258794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.258820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.266149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.266197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.266227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.272926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.272975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.273019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.280327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:57.136 [2024-11-10 00:10:23.280371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.136 [2024-11-10 00:10:23.280397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.136 [2024-11-10 00:10:23.288011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00)
00:36:57.136 [2024-11-10 00:10:23.288060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:57.136 [2024-11-10 00:10:23.288090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:57.136 [2024-11-10 00:10:23.294960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:57.136 [2024-11-10 00:10:23.295008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:57.136 [2024-11-10 00:10:23.295036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:57.136 4262.50 IOPS, 532.81 MiB/s [2024-11-09T23:10:23.337Z] [2024-11-10 00:10:23.303422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:57.136 [2024-11-10 00:10:23.303471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:57.136 [2024-11-10 00:10:23.303499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:57.136
00:36:57.136 Latency(us)
00:36:57.136 [2024-11-09T23:10:23.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:57.136 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:36:57.136 nvme0n1 : 2.00 4261.17 532.65 0.00 0.00 3747.38 1013.38 11068.30
00:36:57.136 [2024-11-09T23:10:23.337Z] ===================================================================================================================
00:36:57.136 [2024-11-09T23:10:23.337Z] Total : 4261.17 532.65 0.00 0.00 3747.38 1013.38 11068.30
00:36:57.136 {
00:36:57.136 "results": [
00:36:57.136 {
00:36:57.136 "job": "nvme0n1",
00:36:57.136 "core_mask": "0x2",
00:36:57.136 "workload": "randread",
00:36:57.136 "status": "finished",
00:36:57.136 "queue_depth": 16,
00:36:57.136 "io_size": 131072,
00:36:57.136 "runtime": 2.00438,
00:36:57.136 "iops": 4261.168041988046,
00:36:57.136 "mibps": 532.6460052485057,
00:36:57.136 "io_failed": 0,
00:36:57.136 "io_timeout": 0,
00:36:57.136 "avg_latency_us": 3747.3834324196573,
00:36:57.136 "min_latency_us": 1013.3807407407407,
00:36:57.136 "max_latency_us": 11068.302222222223
00:36:57.136 }
00:36:57.136 ],
00:36:57.136 "core_count": 1
00:36:57.136 }
00:36:57.137 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:57.137 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:57.137 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:57.137 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:57.137 | .driver_specific
00:36:57.137 | .nvme_error
00:36:57.137 | .status_code
00:36:57.137 | .command_transient_transport_error'
00:36:57.703 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 276 > 0 ))
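The (( 276 > 0 )) assertion above is the pass criterion for this randread run: the host counted 276 COMMAND TRANSIENT TRANSPORT ERROR completions caused by the injected data digest failures, read out of the per-status error counters that bdev_get_iostat exposes when bdev_nvme_set_options --nvme-error-stat is in effect (that option is visible in the setup trace for the next run below). A minimal sketch of that step, using only the names shown in the trace (get_transient_errcount, the rpc.py path, the /var/tmp/bperf.sock socket and the jq filter); the standalone wrapper itself is illustrative, not the authoritative host/digest.sh:

    #!/usr/bin/env bash
    # Return the number of completions with status "transient transport error"
    # for a bdev exposed by the running bdevperf instance (RPC socket
    # /var/tmp/bperf.sock), as reported by the bdev_get_iostat RPC.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    # The digest test only passes if the injected CRC errors actually produced
    # transient transport errors on the host side.
    (( errcount > 0 ))

Because the randwrite phase set up below re-enters the same run_bperf_err path (host/digest.sh@114), the same counter check applies again after that run.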
00:36:57.703 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3633921
00:36:57.703 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3633921 ']'
00:36:57.703 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3633921
00:36:57.703 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:36:57.703 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:36:57.703 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3633921
00:36:57.703 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:36:57.703 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:36:57.703 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3633921'
00:36:57.703 killing process with pid 3633921
00:36:57.703 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3633921
00:36:57.703 Received shutdown signal, test time was about 2.000000 seconds
00:36:57.703
00:36:57.703 Latency(us)
00:36:57.703 [2024-11-09T23:10:23.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:57.703 [2024-11-09T23:10:23.904Z] ===================================================================================================================
00:36:57.703 [2024-11-09T23:10:23.904Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:57.703 00:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3633921
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3634581
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3634581 /var/tmp/bperf.sock
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3634581 ']'
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:58.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:36:58.638 00:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:58.638 [2024-11-10 00:10:24.608534] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization...
00:36:58.638 [2024-11-10 00:10:24.608695] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634581 ]
00:36:58.638 [2024-11-10 00:10:24.743543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:58.896 [2024-11-10 00:10:24.867220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:59.461 00:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:36:59.461 00:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:36:59.461 00:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:59.461 00:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:59.719 00:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:59.719 00:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:59.719 00:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:59.719 00:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:59.719 00:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:59.719 00:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:00.284 nvme0n1
00:37:00.284 00:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:37:00.284 00:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:00.284 00:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:00.284 00:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:00.284 00:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:00.284 00:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock
perform_tests 00:37:00.284 Running I/O for 2 seconds... 00:37:00.284 [2024-11-10 00:10:26.452562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf20d8 00:37:00.285 [2024-11-10 00:10:26.454191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.285 [2024-11-10 00:10:26.454267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:00.285 [2024-11-10 00:10:26.469538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:37:00.285 [2024-11-10 00:10:26.470629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.285 [2024-11-10 00:10:26.470685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.489299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:37:00.543 [2024-11-10 00:10:26.491772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.491809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.500958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0a68 00:37:00.543 [2024-11-10 00:10:26.501982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.502044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.521698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:37:00.543 [2024-11-10 00:10:26.523904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.523942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.538346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee190 00:37:00.543 [2024-11-10 00:10:26.540573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.540636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.549483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc128 00:37:00.543 [2024-11-10 00:10:26.550514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.550571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.566743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:37:00.543 [2024-11-10 00:10:26.567887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.567924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.583754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bff3c8 00:37:00.543 [2024-11-10 00:10:26.585124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.585176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.601483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebfd0 00:37:00.543 [2024-11-10 00:10:26.602778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.602817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.620355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0 00:37:00.543 [2024-11-10 00:10:26.622846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.622897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.631689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:37:00.543 [2024-11-10 00:10:26.632641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.632678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.647718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:37:00.543 [2024-11-10 00:10:26.648765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.648819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.665881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0a68 00:37:00.543 [2024-11-10 00:10:26.667178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5712 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:00.543 [2024-11-10 00:10:26.667236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.684494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:37:00.543 [2024-11-10 00:10:26.686038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.686111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.702817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4578 00:37:00.543 [2024-11-10 00:10:26.704523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.704581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.720799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:37:00.543 [2024-11-10 00:10:26.722685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.722738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:00.543 [2024-11-10 00:10:26.738363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0a68 00:37:00.543 [2024-11-10 00:10:26.740417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.543 [2024-11-10 00:10:26.740454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.755245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf31b8 00:37:00.802 [2024-11-10 00:10:26.757230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.757283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.771534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5220 00:37:00.802 [2024-11-10 00:10:26.773685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.773725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.788766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:37:00.802 [2024-11-10 00:10:26.790803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:45 nsid:1 lba:11641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.790859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.803521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfcdd0 00:37:00.802 [2024-11-10 00:10:26.805295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.805332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.819765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe2e8 00:37:00.802 [2024-11-10 00:10:26.821006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.821047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.836097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6020 00:37:00.802 [2024-11-10 00:10:26.837927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.837976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.853252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:37:00.802 [2024-11-10 00:10:26.854744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.854783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.869134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:37:00.802 [2024-11-10 00:10:26.870534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.870604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.886802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde038 00:37:00.802 [2024-11-10 00:10:26.888435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.888492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.907367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:37:00.802 [2024-11-10 00:10:26.909889] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.909942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.919295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc998 00:37:00.802 [2024-11-10 00:10:26.920600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.920657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.939901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:37:00.802 [2024-11-10 00:10:26.941981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.942034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.955315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:37:00.802 [2024-11-10 00:10:26.957197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.802 [2024-11-10 00:10:26.957241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:00.802 [2024-11-10 00:10:26.972235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016becc78 00:37:00.802 [2024-11-10 00:10:26.973976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.803 [2024-11-10 00:10:26.974030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:00.803 [2024-11-10 00:10:26.989633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1710 00:37:00.803 [2024-11-10 00:10:26.991610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:00.803 [2024-11-10 00:10:26.991664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:01.059 [2024-11-10 00:10:27.004810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe2e8 00:37:01.059 [2024-11-10 00:10:27.006178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.059 [2024-11-10 00:10:27.006231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:01.059 [2024-11-10 00:10:27.022998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016be8d30 00:37:01.059 [2024-11-10 00:10:27.025190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.059 [2024-11-10 00:10:27.025246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:01.059 [2024-11-10 00:10:27.034780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:37:01.059 [2024-11-10 00:10:27.035715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.059 [2024-11-10 00:10:27.035752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:01.059 [2024-11-10 00:10:27.051841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:37:01.059 [2024-11-10 00:10:27.053002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.059 [2024-11-10 00:10:27.053061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:37:01.059 [2024-11-10 00:10:27.070119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bedd58 00:37:01.059 [2024-11-10 00:10:27.071543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.059 [2024-11-10 00:10:27.071603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:01.059 [2024-11-10 00:10:27.087028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf35f0 00:37:01.059 [2024-11-10 00:10:27.088624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.059 [2024-11-10 00:10:27.088679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:01.059 [2024-11-10 00:10:27.103103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bddc00 00:37:01.060 [2024-11-10 00:10:27.104881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.060 [2024-11-10 00:10:27.104916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:01.060 [2024-11-10 00:10:27.119537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee190 00:37:01.060 [2024-11-10 00:10:27.121363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.060 [2024-11-10 00:10:27.121417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:01.060 [2024-11-10 00:10:27.136331] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdece0 00:37:01.060 [2024-11-10 00:10:27.138105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.060 [2024-11-10 00:10:27.138159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:01.060 [2024-11-10 00:10:27.152389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7da8 00:37:01.060 [2024-11-10 00:10:27.154379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.060 [2024-11-10 00:10:27.154433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:01.060 [2024-11-10 00:10:27.168949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2510 00:37:01.060 [2024-11-10 00:10:27.170894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.060 [2024-11-10 00:10:27.170930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:01.060 [2024-11-10 00:10:27.182854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:37:01.060 [2024-11-10 00:10:27.185162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.060 [2024-11-10 00:10:27.185205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:01.060 [2024-11-10 00:10:27.196775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:37:01.060 [2024-11-10 00:10:27.197698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.060 [2024-11-10 00:10:27.197755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:01.060 [2024-11-10 00:10:27.216722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf118 00:37:01.060 [2024-11-10 00:10:27.218321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.060 [2024-11-10 00:10:27.218380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:01.060 [2024-11-10 00:10:27.232023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:37:01.060 [2024-11-10 00:10:27.233630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.060 [2024-11-10 00:10:27.233682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:01.060 
[2024-11-10 00:10:27.249131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:37:01.060 [2024-11-10 00:10:27.250972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.060 [2024-11-10 00:10:27.251030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:01.317 [2024-11-10 00:10:27.265539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:37:01.318 [2024-11-10 00:10:27.266834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.266888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.280493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:37:01.318 [2024-11-10 00:10:27.282078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.282121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.296657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:37:01.318 [2024-11-10 00:10:27.297836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.297872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.311857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:37:01.318 [2024-11-10 00:10:27.313074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.313127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.330170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4de8 00:37:01.318 [2024-11-10 00:10:27.331620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.331672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.346964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8a50 00:37:01.318 [2024-11-10 00:10:27.348604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.348658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.363505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0a68 00:37:01.318 [2024-11-10 00:10:27.365195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.365252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.380396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be88f8 00:37:01.318 [2024-11-10 00:10:27.382212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.382264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.393675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7da8 00:37:01.318 [2024-11-10 00:10:27.394677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.394712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.410866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0 00:37:01.318 [2024-11-10 00:10:27.412119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.412174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.427893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5be8 00:37:01.318 [2024-11-10 00:10:27.429106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.429162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:01.318 15303.00 IOPS, 59.78 MiB/s [2024-11-09T23:10:27.519Z] [2024-11-10 00:10:27.448126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1ca0 00:37:01.318 [2024-11-10 00:10:27.450524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.450579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.459643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:37:01.318 [2024-11-10 00:10:27.460559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 
00:10:27.460619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.474861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bddc00 00:37:01.318 [2024-11-10 00:10:27.475841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.475877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.492248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfac10 00:37:01.318 [2024-11-10 00:10:27.493398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.493461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:37:01.318 [2024-11-10 00:10:27.512213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:37:01.318 [2024-11-10 00:10:27.514046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.318 [2024-11-10 00:10:27.514100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.527536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:37:01.577 [2024-11-10 00:10:27.529376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.529430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.544149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa7d8 00:37:01.577 [2024-11-10 00:10:27.545930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.545983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.560934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beee38 00:37:01.577 [2024-11-10 00:10:27.562766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.562802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.574496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3d08 00:37:01.577 [2024-11-10 00:10:27.575308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8407 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.575362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.595024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0 00:37:01.577 [2024-11-10 00:10:27.597433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.597485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.606551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6738 00:37:01.577 [2024-11-10 00:10:27.607541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.607600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.621841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:37:01.577 [2024-11-10 00:10:27.622813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.622848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.638829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:37:01.577 [2024-11-10 00:10:27.639943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.639978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.655667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:37:01.577 [2024-11-10 00:10:27.657017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.657072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.672720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4578 00:37:01.577 [2024-11-10 00:10:27.674301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.674355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.689706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:37:01.577 [2024-11-10 00:10:27.691513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.691567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.706287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:37:01.577 [2024-11-10 00:10:27.708152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.708207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.720315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe2e8 00:37:01.577 [2024-11-10 00:10:27.721329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.721382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.736903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0350 00:37:01.577 [2024-11-10 00:10:27.738326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.738392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.753740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:37:01.577 [2024-11-10 00:10:27.755147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.755201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:01.577 [2024-11-10 00:10:27.769543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7818 00:37:01.577 [2024-11-10 00:10:27.770404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.577 [2024-11-10 00:10:27.770442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.788657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb048 00:37:01.835 [2024-11-10 00:10:27.790751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.835 [2024-11-10 00:10:27.790789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.806273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016bf7538 00:37:01.835 [2024-11-10 00:10:27.808545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.835 [2024-11-10 00:10:27.808608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.823675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5378 00:37:01.835 [2024-11-10 00:10:27.826187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.835 [2024-11-10 00:10:27.826240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.835281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:37:01.835 [2024-11-10 00:10:27.836316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.835 [2024-11-10 00:10:27.836372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.851518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bedd58 00:37:01.835 [2024-11-10 00:10:27.852816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.835 [2024-11-10 00:10:27.852856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.872036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd640 00:37:01.835 [2024-11-10 00:10:27.873914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.835 [2024-11-10 00:10:27.873951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.887463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:37:01.835 [2024-11-10 00:10:27.889349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.835 [2024-11-10 00:10:27.889403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.904549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:37:01.835 [2024-11-10 00:10:27.906635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.835 [2024-11-10 00:10:27.906672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.917533] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2510 00:37:01.835 [2024-11-10 00:10:27.918777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.835 [2024-11-10 00:10:27.918820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.934696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:37:01.835 [2024-11-10 00:10:27.936161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.835 [2024-11-10 00:10:27.936204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.952080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:37:01.835 [2024-11-10 00:10:27.953727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.835 [2024-11-10 00:10:27.953763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.969507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:37:01.835 [2024-11-10 00:10:27.971387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.835 [2024-11-10 00:10:27.971442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:01.835 [2024-11-10 00:10:27.986776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2510 00:37:01.835 [2024-11-10 00:10:27.988811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.836 [2024-11-10 00:10:27.988863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:01.836 [2024-11-10 00:10:28.004030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:37:01.836 [2024-11-10 00:10:28.006316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.836 [2024-11-10 00:10:28.006374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:01.836 [2024-11-10 00:10:28.015811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc998 00:37:01.836 [2024-11-10 00:10:28.016795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.836 [2024-11-10 00:10:28.016830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:01.836 
[2024-11-10 00:10:28.032810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:37:01.836 [2024-11-10 00:10:28.034043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:01.836 [2024-11-10 00:10:28.034098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.050063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0 00:37:02.094 [2024-11-10 00:10:28.051516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.051572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.067399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4de8 00:37:02.094 [2024-11-10 00:10:28.069046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.069098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.084843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:37:02.094 [2024-11-10 00:10:28.086739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.086774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.101527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:37:02.094 [2024-11-10 00:10:28.103394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.103448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.117369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:37:02.094 [2024-11-10 00:10:28.118778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.118832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.133230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7538 00:37:02.094 [2024-11-10 00:10:28.135037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.135086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.149658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf118 00:37:02.094 [2024-11-10 00:10:28.151104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.151157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.166356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bedd58 00:37:02.094 [2024-11-10 00:10:28.167689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.167727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.181661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bff3c8 00:37:02.094 [2024-11-10 00:10:28.183958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.184000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.196755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf92c0 00:37:02.094 [2024-11-10 00:10:28.197795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.197839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.213659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6738 00:37:02.094 [2024-11-10 00:10:28.214884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.214922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.229751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0ff8 00:37:02.094 [2024-11-10 00:10:28.231168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.231223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.246810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:37:02.094 [2024-11-10 00:10:28.248416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.248471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.263873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3498 00:37:02.094 [2024-11-10 00:10:28.265740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.265777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:02.094 [2024-11-10 00:10:28.280315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdece0 00:37:02.094 [2024-11-10 00:10:28.282176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.094 [2024-11-10 00:10:28.282231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:02.353 [2024-11-10 00:10:28.297033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5658 00:37:02.353 [2024-11-10 00:10:28.298849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.353 [2024-11-10 00:10:28.298901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:02.353 [2024-11-10 00:10:28.313171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde470 00:37:02.353 [2024-11-10 00:10:28.315170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.353 [2024-11-10 00:10:28.315222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:02.353 [2024-11-10 00:10:28.329705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0a68 00:37:02.353 [2024-11-10 00:10:28.331725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.353 [2024-11-10 00:10:28.331762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:02.353 [2024-11-10 00:10:28.343850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6020 00:37:02.353 [2024-11-10 00:10:28.346255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.353 [2024-11-10 00:10:28.346299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:02.353 [2024-11-10 00:10:28.360420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:37:02.353 [2024-11-10 00:10:28.362314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2922 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:02.353 [2024-11-10 00:10:28.362358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:02.353 [2024-11-10 00:10:28.376810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf96f8 00:37:02.353 [2024-11-10 00:10:28.378189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.353 [2024-11-10 00:10:28.378241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:02.353 [2024-11-10 00:10:28.394142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:37:02.353 [2024-11-10 00:10:28.396010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.353 [2024-11-10 00:10:28.396054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:02.353 [2024-11-10 00:10:28.411384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:37:02.353 [2024-11-10 00:10:28.413481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.353 [2024-11-10 00:10:28.413534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:02.353 [2024-11-10 00:10:28.428390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:37:02.353 [2024-11-10 00:10:28.430519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.353 [2024-11-10 00:10:28.430574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:02.353 15462.50 IOPS, 60.40 MiB/s [2024-11-09T23:10:28.554Z] [2024-11-10 00:10:28.441924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:37:02.353 [2024-11-10 00:10:28.442968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:02.353 [2024-11-10 00:10:28.443024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:02.353 00:37:02.353 Latency(us) 00:37:02.353 [2024-11-09T23:10:28.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.353 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:02.353 nvme0n1 : 2.01 15461.61 60.40 0.00 0.00 8267.96 4296.25 23204.60 00:37:02.353 [2024-11-09T23:10:28.554Z] =================================================================================================================== 00:37:02.353 [2024-11-09T23:10:28.554Z] Total : 15461.61 60.40 0.00 0.00 8267.96 4296.25 23204.60 00:37:02.353 { 00:37:02.353 "results": [ 00:37:02.353 { 00:37:02.353 "job": "nvme0n1", 00:37:02.353 "core_mask": "0x2", 00:37:02.353 "workload": "randwrite", 
00:37:02.353 "status": "finished", 00:37:02.353 "queue_depth": 128, 00:37:02.353 "io_size": 4096, 00:37:02.353 "runtime": 2.011628, 00:37:02.353 "iops": 15461.606221428614, 00:37:02.353 "mibps": 60.39689930245552, 00:37:02.353 "io_failed": 0, 00:37:02.353 "io_timeout": 0, 00:37:02.353 "avg_latency_us": 8267.958473173363, 00:37:02.353 "min_latency_us": 4296.248888888889, 00:37:02.353 "max_latency_us": 23204.59851851852 00:37:02.353 } 00:37:02.353 ], 00:37:02.353 "core_count": 1 00:37:02.353 } 00:37:02.353 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:02.353 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:02.353 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:02.354 | .driver_specific 00:37:02.354 | .nvme_error 00:37:02.354 | .status_code 00:37:02.354 | .command_transient_transport_error' 00:37:02.354 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:02.617 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 122 > 0 )) 00:37:02.617 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3634581 00:37:02.617 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3634581 ']' 00:37:02.617 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3634581 00:37:02.617 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:37:02.617 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:02.617 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3634581 00:37:02.617 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:02.617 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:02.617 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3634581' 00:37:02.617 killing process with pid 3634581 00:37:02.617 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3634581 00:37:02.617 Received shutdown signal, test time was about 2.000000 seconds 00:37:02.617 00:37:02.617 Latency(us) 00:37:02.617 [2024-11-09T23:10:28.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.617 [2024-11-09T23:10:28.818Z] =================================================================================================================== 00:37:02.617 [2024-11-09T23:10:28.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:02.617 00:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3634581 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:03.549 00:10:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3635125 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3635125 /var/tmp/bperf.sock 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3635125 ']' 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:03.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:03.549 00:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:03.549 [2024-11-10 00:10:29.731077] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:37:03.549 [2024-11-10 00:10:29.731225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635125 ] 00:37:03.549 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:03.549 Zero copy mechanism will not be used. 
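[editor's note] For readability, an annotated form of the bdevperf launch captured in the trace above; arguments are exactly those used by this run, and the flag descriptions are my reading of them rather than anything the harness prints:
# Annotated sketch of the bdevperf launch above (same arguments as captured in this run):
#   -m 2                   : core mask 0x2, i.e. run the reactor on core 1
#   -r /var/tmp/bperf.sock : private RPC socket that the later rpc.py / bdevperf.py calls talk to
#   -w randwrite -o 131072 -q 16 -t 2 : random 128 KiB writes, queue depth 16, 2-second runtime
#   -z                     : start idle and wait for a perform_tests RPC before issuing I/O
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
The harness backgrounds this process, records its PID as bperfpid, and waits on the RPC socket (waitforlisten) before configuring the test, as the surrounding trace shows.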
00:37:03.807 [2024-11-10 00:10:29.866124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.807 [2024-11-10 00:10:29.993818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.740 00:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:04.740 00:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:37:04.740 00:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:04.740 00:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:04.997 00:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:04.997 00:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.997 00:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:04.997 00:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.997 00:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:04.997 00:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:05.256 nvme0n1 00:37:05.256 00:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:05.256 00:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.256 00:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:05.256 00:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.256 00:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:05.256 00:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:05.515 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:05.515 Zero copy mechanism will not be used. 00:37:05.515 Running I/O for 2 seconds... 
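[editor's note] The trace above and the error output below are one inject-run-measure cycle of the digest error test. A condensed sketch of that cycle, built only from the socket path, bdev name, RPC calls and jq filter that appear in this log ($SPDK is my shorthand, not a variable the harness defines):
# One error-injection pass, as driven over the bperf RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# (bdev_nvme_set_options --nvme-error-stat was issued earlier in this run so per-status counters are kept,
#  and the controller was attached with --ddgst so data digests are generated and checked.)
# Corrupt crc32c results at the interval configured here (-i 32), so some data digests go out wrong.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
# Tell the idle bdevperf instance to run its configured randwrite job for the 2-second window.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
# Read back how many completions came back as COMMAND TRANSIENT TRANSPORT ERROR (00/22);
# this is the count the harness later checks with (( count > 0 )).
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'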
00:37:05.515 [2024-11-10 00:10:31.557492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.557697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.557756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.565075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.565213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.565259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.572755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.572886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.572944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.580061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.580205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.580250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.587376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.587514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.587559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.594678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.594793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.594837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.601979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.602127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.602171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.609173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.609320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.609370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.616223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.616360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.616404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.623334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.623487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.623531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.631844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.631997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.632045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.640027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.640282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.640325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.648702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.648953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.648998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.657185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.657360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.657405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.665619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.665877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.665962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.673838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.674004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.674069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.681826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.682058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.682102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.690083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.690267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.690311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.698311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.698537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.515 [2024-11-10 00:10:31.698581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.515 [2024-11-10 00:10:31.706109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.515 [2024-11-10 00:10:31.706224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.516 [2024-11-10 00:10:31.706268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.516 [2024-11-10 00:10:31.714895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.516 [2024-11-10 00:10:31.715061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:05.516 [2024-11-10 00:10:31.715106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.723008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.723238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.723283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.730318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.730475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.730525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.737372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.737557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.737635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.744519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.744703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.744748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.751656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.751798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.751837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.758626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.758921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.758965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.765653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.765777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.765823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.772615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.772786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.772825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.779717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.779856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.779914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.786712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.786859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.786915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.793746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.794016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.794060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.800719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.800900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.800944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.807728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.807879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.807934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.815189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.815438] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.815483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.822687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.822825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.822882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.829730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.829965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.830009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.836781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.837028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.837071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.843782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.843976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.844020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.850826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.851130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.774 [2024-11-10 00:10:31.851175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.774 [2024-11-10 00:10:31.857803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.774 [2024-11-10 00:10:31.857977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.858021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.864941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.865222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.865266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.871952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.872116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.872159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.879036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.879327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.879378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.886107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.886328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.886371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.893049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.893287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.893330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.900143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.900308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.900353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.907189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.907319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.907363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.914286] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.914455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.914499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.921378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.921526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.921570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.928473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.928670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.928710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.935584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.935830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.935870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.942609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.942911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.942954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.949580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.949760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.949799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.956504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.956688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.956728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.963621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.963859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.963923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:05.775 [2024-11-10 00:10:31.970556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:05.775 [2024-11-10 00:10:31.970832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.775 [2024-11-10 00:10:31.970873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.034 [2024-11-10 00:10:31.977571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.034 [2024-11-10 00:10:31.977852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.034 [2024-11-10 00:10:31.977893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.034 [2024-11-10 00:10:31.984702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.034 [2024-11-10 00:10:31.984952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.034 [2024-11-10 00:10:31.984997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.034 [2024-11-10 00:10:31.991744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.034 [2024-11-10 00:10:31.991901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.034 [2024-11-10 00:10:31.991944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.034 [2024-11-10 00:10:31.998774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.034 [2024-11-10 00:10:31.998930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.034 [2024-11-10 00:10:31.998999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.034 [2024-11-10 00:10:32.005826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.034 [2024-11-10 00:10:32.006104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.034 [2024-11-10 00:10:32.006147] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.034 [2024-11-10 00:10:32.012923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.034 [2024-11-10 00:10:32.013144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.034 [2024-11-10 00:10:32.013188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.034 [2024-11-10 00:10:32.020054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.034 [2024-11-10 00:10:32.020267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.034 [2024-11-10 00:10:32.020311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.034 [2024-11-10 00:10:32.027172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.034 [2024-11-10 00:10:32.027435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.034 [2024-11-10 00:10:32.027480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.034 [2024-11-10 00:10:32.034147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.034 [2024-11-10 00:10:32.034329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.034373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.041054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.041298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.041342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.048069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.048244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.048288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.055085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.055261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 
00:10:32.055304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.062089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.062354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.062399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.069382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.069557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.069631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.076641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.076855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.076927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.083769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.083943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.083987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.090758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.091004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.091047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.097700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.097908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.097952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.104704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.104831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.104871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.111739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.111910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.111954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.118778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.119016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.119060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.125753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.125940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.125999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.132622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.132809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.132848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.139633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.139802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.139841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.146712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.146917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.146962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.153743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.153917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.153963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.160666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.160799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.160840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.167774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.167946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.167988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.175041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.175275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.175319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.182136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.182313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.182363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.189198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.189364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.189407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.035 [2024-11-10 00:10:32.196156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.035 [2024-11-10 00:10:32.196373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.035 [2024-11-10 00:10:32.196424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.036 [2024-11-10 00:10:32.203219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:37:06.036 [2024-11-10 00:10:32.203394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.036 [2024-11-10 00:10:32.203436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.036 [2024-11-10 00:10:32.210146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.036 [2024-11-10 00:10:32.210326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.036 [2024-11-10 00:10:32.210367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.036 [2024-11-10 00:10:32.217183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.036 [2024-11-10 00:10:32.217349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.036 [2024-11-10 00:10:32.217392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.036 [2024-11-10 00:10:32.224242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.036 [2024-11-10 00:10:32.224410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.036 [2024-11-10 00:10:32.224452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.036 [2024-11-10 00:10:32.231251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.036 [2024-11-10 00:10:32.231412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.036 [2024-11-10 00:10:32.231455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.238322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.238496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.238551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.245321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.245480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.245523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.252458] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.252662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.252702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.259628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.259855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.259912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.266648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.266771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.266810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.273676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.273837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.273876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.280561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.280821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.280863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.287516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.287767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.287809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.294457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.294676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.294714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.301519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.301744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.301792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.308949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.309069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.309112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.316613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.316762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.316801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.324153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.324324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.324366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.331225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.331386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.331428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.338312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.338509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.338552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.345382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.345565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.345639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.352337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.352533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.352576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.359653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.359806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.359851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.366773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.366966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.295 [2024-11-10 00:10:32.367016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.295 [2024-11-10 00:10:32.373725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.295 [2024-11-10 00:10:32.373977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.374021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.380805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.381046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.381090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.387934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.388106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.388150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.394999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.395154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 
00:10:32.395198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.402116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.402349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.402401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.409355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.409520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.409564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.416515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.416708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.416753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.423713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.423881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.423942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.430781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.430936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.430980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.438014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.438215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.438267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.445111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.445280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.445324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.452114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.452290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.452332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.459091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.459325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.459369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.466102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.466251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.466295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.473213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.473498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.473544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.480514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.480754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.480795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.487771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.488028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.488073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.296 [2024-11-10 00:10:32.494868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.296 [2024-11-10 00:10:32.495117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.296 [2024-11-10 00:10:32.495157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.502043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.502174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.502225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.509414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.509707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.509750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.516906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.517144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.517189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.524235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.524399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.524448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.531122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.531293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.531333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.537944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.538146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.538199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.544998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.545162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.545201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.554 4272.00 IOPS, 534.00 MiB/s [2024-11-09T23:10:32.755Z] [2024-11-10 00:10:32.553475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.553677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.553719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.560560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.560761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.560800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.567451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.567666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.567712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.574449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.574595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.574686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.581885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.582076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.582145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.589087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.589324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.589368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.554 
[2024-11-10 00:10:32.596259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.596458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.596505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.603243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.554 [2024-11-10 00:10:32.603469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.554 [2024-11-10 00:10:32.603513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.554 [2024-11-10 00:10:32.610294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.610466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.610510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.617328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.617522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.617565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.624519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.624706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.624747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.631664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.631922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.631975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.638727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.638864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.638920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.645813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.645972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.646014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.652958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.653175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.653220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.660105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.660341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.660383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.667195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.667354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.667397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.674327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.674492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.674534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.681258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.681426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.681475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.688206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.688354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.688397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.695215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.695380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.695423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.702153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.702351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.702392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.709257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.709418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.709460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.716324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.716478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.716520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.723433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.723641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.723680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.730500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.730677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.730724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.737854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.738033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:06.555 [2024-11-10 00:10:32.738075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.744879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.745025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.745068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.555 [2024-11-10 00:10:32.752010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.555 [2024-11-10 00:10:32.752177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.555 [2024-11-10 00:10:32.752220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.758945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.759065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.759107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.766069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.766307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.766352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.773206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.773354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.773396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.780214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.780444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.780489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.787454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.787657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.787696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.794701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.794960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.795005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.801688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.801834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.801872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.808676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.808829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.808867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.815724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.815856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.815910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.822790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.822972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.823015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.830264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.830414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.830457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.837367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.837495] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.837538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.844450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.844618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.844681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.851552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.851802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.851856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.858598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.858749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.858788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.865751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.865995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.866045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.814 [2024-11-10 00:10:32.872947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.814 [2024-11-10 00:10:32.873202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.814 [2024-11-10 00:10:32.873251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.880183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.880358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.880401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.887207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.887384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.887426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.894200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.894372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.894415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.901194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.901363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.901405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.908236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.908420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.908462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.915105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.915259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.915301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.922277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.922455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.922497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.929180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.929347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.929389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.936188] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.936335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.936378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.943221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.943370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.943413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.950267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.950417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.950460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.957319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.957457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.957499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.964270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.964402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.964452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.971197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.971331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.971380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.978615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.978758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.978797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.985700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.985942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.985987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.992772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.992976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:32.993028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:32.999713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:32.999964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:33.000009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:33.006583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:33.006759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:33.006799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:06.815 [2024-11-10 00:10:33.013572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:06.815 [2024-11-10 00:10:33.013852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.815 [2024-11-10 00:10:33.013895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.020560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.020751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.020790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.027673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.027796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.027834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.034820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.034980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.035051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.042116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.042308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.042354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.049155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.049393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.049438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.056192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.056402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.056444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.063229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.063402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.063445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.070225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.070412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.070455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.077326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.077497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 
00:10:33.077539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.084433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.084636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.084675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.091724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.091861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.091917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.098774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.098930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.098973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.105742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.105870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.105926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.112751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.112942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.112984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.119719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.119928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.119982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.126684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.126930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9856 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.126980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.133973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.134223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.134268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.141013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.141178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.141219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.148097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.148348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.148392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.155168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.075 [2024-11-10 00:10:33.155306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.075 [2024-11-10 00:10:33.155357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.075 [2024-11-10 00:10:33.162073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.162268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.162307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.168985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.169208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.169254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.176273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.176494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.176538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.183502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.183649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.183687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.190694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.190930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.190974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.197867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.198153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.198199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.205122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.205279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.205323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.212418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.212665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.212705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.219536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.219728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.219767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.226232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.226419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.226465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.232896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.233135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.233183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.239934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.240127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.240165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.248328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.248579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.248628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.255506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.255683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.255722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.262633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.262759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.262798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.076 [2024-11-10 00:10:33.269290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.076 [2024-11-10 00:10:33.269407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.076 [2024-11-10 00:10:33.269445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.275796] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.275972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.276026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.282499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.282651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.282690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.289149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.289272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.289310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.295728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.295931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.295969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.302956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.303177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.303217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.309830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.310023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.310061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.316719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.316888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.316927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 
p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.323798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.323955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.323993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.331258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.331502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.331541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.338455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.338664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.338703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.345427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.345640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.345680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.352078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.352243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.352281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.358942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.359148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.359192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.365815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.366075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.366115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.372815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.372986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.373024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.379766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.379959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.379997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.386611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.386777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.386816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.393703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.393866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.393919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.336 [2024-11-10 00:10:33.400467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.336 [2024-11-10 00:10:33.400676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.336 [2024-11-10 00:10:33.400715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.407312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.407443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.407481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.414225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.414375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.414413] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.421613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.421772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.421810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.429286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.429414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.429458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.436096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.436236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.436279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.443028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.443165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.443208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.450990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.451086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.451123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.457771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.457889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.457953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.464289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.464424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.464461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.471003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.471179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.471216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.477578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.477708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.477746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.485107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.485213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.485267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.491835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.491972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.492015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.498541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.498697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.498746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.505389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.505491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.505529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.513703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.513852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.513914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.521982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.522203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.522243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.528911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.529056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.529096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:07.337 [2024-11-10 00:10:33.535600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.337 [2024-11-10 00:10:33.535781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.337 [2024-11-10 00:10:33.535823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:07.596 [2024-11-10 00:10:33.542435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.596 [2024-11-10 00:10:33.542630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.596 [2024-11-10 00:10:33.542677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:07.596 4332.50 IOPS, 541.56 MiB/s [2024-11-09T23:10:33.797Z] [2024-11-10 00:10:33.550579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:37:07.596 [2024-11-10 00:10:33.550834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.596 [2024-11-10 00:10:33.550874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:07.596 00:37:07.596 Latency(us) 00:37:07.596 [2024-11-09T23:10:33.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.596 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:07.596 nvme0n1 : 2.00 4331.41 541.43 0.00 0.00 3683.24 2585.03 12815.93 00:37:07.596 [2024-11-09T23:10:33.797Z] =================================================================================================================== 00:37:07.596 [2024-11-09T23:10:33.797Z] Total : 4331.41 541.43 0.00 0.00 3683.24 2585.03 12815.93 00:37:07.596 { 00:37:07.596 "results": [ 00:37:07.596 { 00:37:07.596 "job": 
"nvme0n1", 00:37:07.596 "core_mask": "0x2", 00:37:07.596 "workload": "randwrite", 00:37:07.596 "status": "finished", 00:37:07.596 "queue_depth": 16, 00:37:07.596 "io_size": 131072, 00:37:07.596 "runtime": 2.00489, 00:37:07.596 "iops": 4331.409703275492, 00:37:07.596 "mibps": 541.4262129094365, 00:37:07.596 "io_failed": 0, 00:37:07.596 "io_timeout": 0, 00:37:07.596 "avg_latency_us": 3683.242243035297, 00:37:07.596 "min_latency_us": 2585.031111111111, 00:37:07.596 "max_latency_us": 12815.92888888889 00:37:07.596 } 00:37:07.596 ], 00:37:07.596 "core_count": 1 00:37:07.596 } 00:37:07.596 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:07.596 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:07.596 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:07.596 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:07.596 | .driver_specific 00:37:07.596 | .nvme_error 00:37:07.596 | .status_code 00:37:07.596 | .command_transient_transport_error' 00:37:07.855 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 280 > 0 )) 00:37:07.855 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3635125 00:37:07.855 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3635125 ']' 00:37:07.855 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3635125 00:37:07.855 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:37:07.855 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:07.855 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3635125 00:37:07.855 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:07.855 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:07.855 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3635125' 00:37:07.855 killing process with pid 3635125 00:37:07.855 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3635125 00:37:07.855 Received shutdown signal, test time was about 2.000000 seconds 00:37:07.855 00:37:07.855 Latency(us) 00:37:07.855 [2024-11-09T23:10:34.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.855 [2024-11-09T23:10:34.056Z] =================================================================================================================== 00:37:07.855 [2024-11-09T23:10:34.056Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:07.855 00:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3635125 00:37:08.788 00:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3633098 00:37:08.788 00:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@952 -- # '[' -z 3633098 ']' 00:37:08.788 00:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3633098 00:37:08.788 00:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:37:08.788 00:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:08.788 00:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3633098 00:37:08.788 00:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:08.788 00:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:08.788 00:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3633098' 00:37:08.788 killing process with pid 3633098 00:37:08.788 00:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3633098 00:37:08.788 00:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3633098 00:37:09.786 00:37:09.786 real 0m23.270s 00:37:09.786 user 0m45.732s 00:37:09.786 sys 0m4.800s 00:37:09.786 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:09.786 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:09.786 ************************************ 00:37:09.786 END TEST nvmf_digest_error 00:37:09.786 ************************************ 00:37:09.786 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:09.786 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:09.786 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:09.786 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:09.786 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:09.786 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:09.786 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:09.786 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:09.786 rmmod nvme_tcp 00:37:09.786 rmmod nvme_fabrics 00:37:09.786 rmmod nvme_keyring 00:37:10.045 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:10.045 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:10.045 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:10.045 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3633098 ']' 00:37:10.045 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3633098 00:37:10.045 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 3633098 ']' 00:37:10.045 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 3633098 00:37:10.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3633098) - No such process 00:37:10.045 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with 
pid 3633098 is not found' 00:37:10.045 Process with pid 3633098 is not found 00:37:10.045 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:10.045 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:10.045 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:10.045 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:10.046 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:10.046 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:10.046 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:10.046 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:10.046 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:10.046 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.046 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:10.046 00:10:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:11.955 00:37:11.955 real 0m52.222s 00:37:11.955 user 1m34.358s 00:37:11.955 sys 0m11.208s 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:11.955 ************************************ 00:37:11.955 END TEST nvmf_digest 00:37:11.955 ************************************ 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.955 ************************************ 00:37:11.955 START TEST nvmf_bdevperf 00:37:11.955 ************************************ 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:11.955 * Looking for test storage... 
00:37:11.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:37:11.955 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:12.214 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:12.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.215 --rc genhtml_branch_coverage=1 00:37:12.215 --rc genhtml_function_coverage=1 00:37:12.215 --rc genhtml_legend=1 00:37:12.215 --rc geninfo_all_blocks=1 00:37:12.215 --rc geninfo_unexecuted_blocks=1 00:37:12.215 00:37:12.215 ' 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:12.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.215 --rc genhtml_branch_coverage=1 00:37:12.215 --rc genhtml_function_coverage=1 00:37:12.215 --rc genhtml_legend=1 00:37:12.215 --rc geninfo_all_blocks=1 00:37:12.215 --rc geninfo_unexecuted_blocks=1 00:37:12.215 00:37:12.215 ' 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:12.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.215 --rc genhtml_branch_coverage=1 00:37:12.215 --rc genhtml_function_coverage=1 00:37:12.215 --rc genhtml_legend=1 00:37:12.215 --rc geninfo_all_blocks=1 00:37:12.215 --rc geninfo_unexecuted_blocks=1 00:37:12.215 00:37:12.215 ' 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:12.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.215 --rc genhtml_branch_coverage=1 00:37:12.215 --rc genhtml_function_coverage=1 00:37:12.215 --rc genhtml_legend=1 00:37:12.215 --rc geninfo_all_blocks=1 00:37:12.215 --rc geninfo_unexecuted_blocks=1 00:37:12.215 00:37:12.215 ' 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:12.215 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:12.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:12.216 00:10:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:14.125 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:14.125 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
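For reference, the device discovery being traced here amounts to globbing the net/ directory under each supported NIC's PCI function and reporting what it finds; a minimal sketch of the same idea (the PCI address is the one reported above, the loop and variable names are illustrative rather than taken from nvmf/common.sh):

    pci=0000:0a:00.0                                # E810 port found above (0x8086:0x159b)
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] || continue                   # skip if the glob matched nothing
        echo "Found net devices under $pci: ${net##*/}"
    done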
00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:14.125 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:14.125 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.125 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:37:14.126 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.126 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.126 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.126 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:14.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:37:14.383 00:37:14.383 --- 10.0.0.2 ping statistics --- 00:37:14.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.383 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:14.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:37:14.383 00:37:14.383 --- 10.0.0.1 ping statistics --- 00:37:14.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.383 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3637767 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3637767 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3637767 ']' 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:14.383 00:10:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:14.383 [2024-11-10 00:10:40.518182] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:37:14.383 [2024-11-10 00:10:40.518336] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.642 [2024-11-10 00:10:40.672730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:14.642 [2024-11-10 00:10:40.798090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:14.642 [2024-11-10 00:10:40.798157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.642 [2024-11-10 00:10:40.798178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.642 [2024-11-10 00:10:40.798199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.642 [2024-11-10 00:10:40.798215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:14.642 [2024-11-10 00:10:40.800642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:14.642 [2024-11-10 00:10:40.800743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.642 [2024-11-10 00:10:40.800747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.576 [2024-11-10 00:10:41.556745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.576 Malloc0 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
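As a reference point, the target bring-up traced so far boils down to a handful of commands; a minimal sketch, assuming an SPDK checkout as the working directory and using a poll of rpc_get_methods in place of the harness's waitforlisten helper (the namespace and listener are attached the same way in the lines that follow):

    # launch the NVMe-oF target inside the target network namespace created earlier
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # wait until the JSON-RPC socket answers before configuring anything
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # same RPCs as the rpc_cmd calls above: TCP transport, backing bdev, subsystem
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001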
00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.576 [2024-11-10 00:10:41.678751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:15.576 { 00:37:15.576 "params": { 00:37:15.576 "name": "Nvme$subsystem", 00:37:15.576 "trtype": "$TEST_TRANSPORT", 00:37:15.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:15.576 "adrfam": "ipv4", 00:37:15.576 "trsvcid": "$NVMF_PORT", 00:37:15.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:15.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:15.576 "hdgst": ${hdgst:-false}, 00:37:15.576 "ddgst": ${ddgst:-false} 00:37:15.576 }, 00:37:15.576 "method": "bdev_nvme_attach_controller" 00:37:15.576 } 00:37:15.576 EOF 00:37:15.576 )") 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:15.576 00:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:15.576 "params": { 00:37:15.576 "name": "Nvme1", 00:37:15.576 "trtype": "tcp", 00:37:15.576 "traddr": "10.0.0.2", 00:37:15.576 "adrfam": "ipv4", 00:37:15.576 "trsvcid": "4420", 00:37:15.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:15.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:15.576 "hdgst": false, 00:37:15.576 "ddgst": false 00:37:15.576 }, 00:37:15.576 "method": "bdev_nvme_attach_controller" 00:37:15.576 }' 00:37:15.576 [2024-11-10 00:10:41.767426] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:37:15.576 [2024-11-10 00:10:41.767565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638022 ] 00:37:15.834 [2024-11-10 00:10:41.900917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.834 [2024-11-10 00:10:42.027726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.400 Running I/O for 1 seconds... 00:37:17.334 6135.00 IOPS, 23.96 MiB/s 00:37:17.334 Latency(us) 00:37:17.334 [2024-11-09T23:10:43.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.334 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:17.334 Verification LBA range: start 0x0 length 0x4000 00:37:17.334 Nvme1n1 : 1.01 6227.36 24.33 0.00 0.00 20466.01 1104.40 18835.53 00:37:17.334 [2024-11-09T23:10:43.535Z] =================================================================================================================== 00:37:17.334 [2024-11-09T23:10:43.535Z] Total : 6227.36 24.33 0.00 0.00 20466.01 1104.40 18835.53 00:37:18.268 00:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3638291 00:37:18.268 00:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:18.268 00:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:18.268 00:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:18.268 00:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:18.268 00:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:18.268 00:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:18.268 00:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:18.268 { 00:37:18.268 "params": { 00:37:18.268 "name": "Nvme$subsystem", 00:37:18.268 "trtype": "$TEST_TRANSPORT", 00:37:18.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:18.268 "adrfam": "ipv4", 00:37:18.268 "trsvcid": "$NVMF_PORT", 00:37:18.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:18.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:18.268 "hdgst": ${hdgst:-false}, 00:37:18.268 "ddgst": ${ddgst:-false} 00:37:18.268 }, 00:37:18.268 "method": "bdev_nvme_attach_controller" 00:37:18.268 } 00:37:18.268 EOF 00:37:18.268 )") 00:37:18.268 00:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:18.268 00:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
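For reference, the config being generated here is the bdev_nvme_attach_controller fragment printed above, handed to bdevperf over a file descriptor; run by hand it would look roughly like the sketch below. The file path is arbitrary, the subsystems/bdev wrapper is SPDK's usual JSON config layout rather than something shown verbatim in this trace, and the params and command-line flags are the ones traced above:

    cat > /tmp/bperf.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    ./build/examples/bdevperf --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 15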
00:37:18.268 00:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:18.268 00:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:18.268 "params": { 00:37:18.268 "name": "Nvme1", 00:37:18.268 "trtype": "tcp", 00:37:18.268 "traddr": "10.0.0.2", 00:37:18.268 "adrfam": "ipv4", 00:37:18.268 "trsvcid": "4420", 00:37:18.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:18.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:18.268 "hdgst": false, 00:37:18.268 "ddgst": false 00:37:18.268 }, 00:37:18.268 "method": "bdev_nvme_attach_controller" 00:37:18.268 }' 00:37:18.268 [2024-11-10 00:10:44.390431] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:37:18.268 [2024-11-10 00:10:44.390579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638291 ] 00:37:18.526 [2024-11-10 00:10:44.523410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.526 [2024-11-10 00:10:44.650425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.091 Running I/O for 15 seconds... 00:37:21.403 6076.00 IOPS, 23.73 MiB/s [2024-11-09T23:10:47.604Z] 6160.50 IOPS, 24.06 MiB/s [2024-11-09T23:10:47.604Z] 00:10:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3637767 00:37:21.403 00:10:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:21.403 [2024-11-10 00:10:47.333336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.333405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.333459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.333487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.333518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.333545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.333573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.333611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.333658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.333682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.333708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 
00:10:47.333732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.333760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.333784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.333808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.333831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.333883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.333906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.333946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.333987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.334042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.334093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.334144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.334195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.334246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.403 [2024-11-10 00:10:47.334298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.403 [2024-11-10 00:10:47.334349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.403 [2024-11-10 00:10:47.334400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.403 [2024-11-10 00:10:47.334450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.403 [2024-11-10 00:10:47.334502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.403 [2024-11-10 00:10:47.334558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.403 [2024-11-10 00:10:47.334621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.403 [2024-11-10 00:10:47.334689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.403 [2024-11-10 00:10:47.334736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.403 [2024-11-10 00:10:47.334784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.403 [2024-11-10 00:10:47.334808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.334830] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.334855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.334893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.334917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.334957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.334985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.335963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.335990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.336014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.336041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.336065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.336092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.336116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.336143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.336167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.336193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.336217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.336243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.336268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.336295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.336319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.336345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.336369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.336396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.336421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:21.404 [2024-11-10 00:10:47.336448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.336472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.336498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.336524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.404 [2024-11-10 00:10:47.336558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.404 [2024-11-10 00:10:47.336584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.336621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.336661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.336687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.336708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.336732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.336753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.336777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.336799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.336823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.336846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.336884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.336906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.336945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.336971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 
00:10:47.336998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.337968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.337992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.338018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.338042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.338069] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.338093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.338120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.338149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.338176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.338200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.338226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.338250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.338276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.338299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.405 [2024-11-10 00:10:47.338325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.405 [2024-11-10 00:10:47.338349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.338375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.338399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.338425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.338448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.338475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.338499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.338524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.338548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.338579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.338628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.338656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.338678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.338702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.338724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.338748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.338771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.338795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.338816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.338840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.338863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.338905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.338929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.338955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.338980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102592 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:21.406 [2024-11-10 00:10:47.339396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:21.406 [2024-11-10 00:10:47.339683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.339971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.339991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.340029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.340049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.340070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.406 [2024-11-10 00:10:47.340089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.406 [2024-11-10 00:10:47.340110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.407 [2024-11-10 00:10:47.340129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.407 [2024-11-10 00:10:47.340151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.407 [2024-11-10 
00:10:47.340170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.407 [2024-11-10 00:10:47.340206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:37:21.407 [2024-11-10 00:10:47.340239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:21.407 [2024-11-10 00:10:47.340258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:21.407 [2024-11-10 00:10:47.340277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102752 len:8 PRP1 0x0 PRP2 0x0 00:37:21.407 [2024-11-10 00:10:47.340296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.407 [2024-11-10 00:10:47.340717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:21.407 [2024-11-10 00:10:47.340749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.407 [2024-11-10 00:10:47.340774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:21.407 [2024-11-10 00:10:47.340795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.407 [2024-11-10 00:10:47.340816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:21.407 [2024-11-10 00:10:47.340836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.407 [2024-11-10 00:10:47.340857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:21.407 [2024-11-10 00:10:47.340877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:21.407 [2024-11-10 00:10:47.340915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.407 [2024-11-10 00:10:47.345162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.407 [2024-11-10 00:10:47.345227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.407 [2024-11-10 00:10:47.346071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.407 [2024-11-10 00:10:47.346118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.407 [2024-11-10 00:10:47.346147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.407 [2024-11-10 00:10:47.346436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.407 [2024-11-10 00:10:47.346746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 
00:37:21.407 [2024-11-10 00:10:47.346792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.407 [2024-11-10 00:10:47.346819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.407 [2024-11-10 00:10:47.346843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.407 [2024-11-10 00:10:47.359980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.407 [2024-11-10 00:10:47.360430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.407 [2024-11-10 00:10:47.360473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.407 [2024-11-10 00:10:47.360500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.407 [2024-11-10 00:10:47.360796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.407 [2024-11-10 00:10:47.361091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.407 [2024-11-10 00:10:47.361124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.407 [2024-11-10 00:10:47.361146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.407 [2024-11-10 00:10:47.361168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.407 [2024-11-10 00:10:47.374602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.407 [2024-11-10 00:10:47.375101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.407 [2024-11-10 00:10:47.375146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.407 [2024-11-10 00:10:47.375169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.407 [2024-11-10 00:10:47.375472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.407 [2024-11-10 00:10:47.375783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.407 [2024-11-10 00:10:47.375812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.407 [2024-11-10 00:10:47.375833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.407 [2024-11-10 00:10:47.375853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.407 [2024-11-10 00:10:47.389176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.407 [2024-11-10 00:10:47.389682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.407 [2024-11-10 00:10:47.389725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.407 [2024-11-10 00:10:47.389752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.407 [2024-11-10 00:10:47.390050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.407 [2024-11-10 00:10:47.390339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.407 [2024-11-10 00:10:47.390370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.407 [2024-11-10 00:10:47.390394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.407 [2024-11-10 00:10:47.390416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.407 [2024-11-10 00:10:47.403678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.407 [2024-11-10 00:10:47.404128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.407 [2024-11-10 00:10:47.404180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.407 [2024-11-10 00:10:47.404206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.407 [2024-11-10 00:10:47.404491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.407 [2024-11-10 00:10:47.404801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.407 [2024-11-10 00:10:47.404834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.407 [2024-11-10 00:10:47.404861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.407 [2024-11-10 00:10:47.404883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
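In the connect() failures above, errno = 111 is ECONNREFUSED on Linux: nothing is listening on 10.0.0.2:4420 any more because the target process was killed earlier in the log, so every reset attempt fails at the TCP connect. A quick manual probe that shows the same condition (illustrative only, not part of the test; address and port taken from the log):

```bash
# Pure-bash probe of the listener the host keeps trying to reach; while the
# target is down this fails with "Connection refused", i.e. errno 111.
timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
  && echo "listener is back on 10.0.0.2:4420" \
  || echo "no listener on 10.0.0.2:4420 (ECONNREFUSED)"
```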
00:37:21.407 [2024-11-10 00:10:47.418203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.407 [2024-11-10 00:10:47.418693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.407 [2024-11-10 00:10:47.418744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.407 [2024-11-10 00:10:47.418771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.407 [2024-11-10 00:10:47.419053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.407 [2024-11-10 00:10:47.419338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.407 [2024-11-10 00:10:47.419369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.407 [2024-11-10 00:10:47.419392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.407 [2024-11-10 00:10:47.419414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.407 [2024-11-10 00:10:47.432739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.407 [2024-11-10 00:10:47.433213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.407 [2024-11-10 00:10:47.433265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.408 [2024-11-10 00:10:47.433297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.408 [2024-11-10 00:10:47.433600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.408 [2024-11-10 00:10:47.433888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.408 [2024-11-10 00:10:47.433919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.408 [2024-11-10 00:10:47.433943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.408 [2024-11-10 00:10:47.433965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.408 [2024-11-10 00:10:47.447237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.408 [2024-11-10 00:10:47.447723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.408 [2024-11-10 00:10:47.447772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.408 [2024-11-10 00:10:47.447796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.408 [2024-11-10 00:10:47.448098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.408 [2024-11-10 00:10:47.448382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.408 [2024-11-10 00:10:47.448415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.408 [2024-11-10 00:10:47.448438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.408 [2024-11-10 00:10:47.448460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.408 [2024-11-10 00:10:47.461709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.408 [2024-11-10 00:10:47.462153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.408 [2024-11-10 00:10:47.462202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.408 [2024-11-10 00:10:47.462228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.408 [2024-11-10 00:10:47.462511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.408 [2024-11-10 00:10:47.462808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.408 [2024-11-10 00:10:47.462841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.408 [2024-11-10 00:10:47.462875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.408 [2024-11-10 00:10:47.462897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.408 [2024-11-10 00:10:47.476181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.408 [2024-11-10 00:10:47.476659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.408 [2024-11-10 00:10:47.476712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.408 [2024-11-10 00:10:47.476738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.408 [2024-11-10 00:10:47.477021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.408 [2024-11-10 00:10:47.477311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.408 [2024-11-10 00:10:47.477343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.408 [2024-11-10 00:10:47.477366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.408 [2024-11-10 00:10:47.477387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.408 [2024-11-10 00:10:47.490657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.408 [2024-11-10 00:10:47.491106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.408 [2024-11-10 00:10:47.491156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.408 [2024-11-10 00:10:47.491183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.408 [2024-11-10 00:10:47.491465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.408 [2024-11-10 00:10:47.491772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.408 [2024-11-10 00:10:47.491814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.408 [2024-11-10 00:10:47.491837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.408 [2024-11-10 00:10:47.491859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.408 [2024-11-10 00:10:47.505165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.408 [2024-11-10 00:10:47.505642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.408 [2024-11-10 00:10:47.505693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.408 [2024-11-10 00:10:47.505719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.408 [2024-11-10 00:10:47.506005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.408 [2024-11-10 00:10:47.506289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.408 [2024-11-10 00:10:47.506321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.408 [2024-11-10 00:10:47.506344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.408 [2024-11-10 00:10:47.506366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.408 [2024-11-10 00:10:47.519602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.408 [2024-11-10 00:10:47.520105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.408 [2024-11-10 00:10:47.520156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.408 [2024-11-10 00:10:47.520182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.408 [2024-11-10 00:10:47.520465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.408 [2024-11-10 00:10:47.520768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.408 [2024-11-10 00:10:47.520806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.408 [2024-11-10 00:10:47.520838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.409 [2024-11-10 00:10:47.520860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.409 [2024-11-10 00:10:47.534083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.409 [2024-11-10 00:10:47.534547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.409 [2024-11-10 00:10:47.534607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.409 [2024-11-10 00:10:47.534636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.409 [2024-11-10 00:10:47.534918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.409 [2024-11-10 00:10:47.535202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.409 [2024-11-10 00:10:47.535233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.409 [2024-11-10 00:10:47.535255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.409 [2024-11-10 00:10:47.535278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.409 [2024-11-10 00:10:47.548448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.409 [2024-11-10 00:10:47.548917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.409 [2024-11-10 00:10:47.548967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.409 [2024-11-10 00:10:47.548993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.409 [2024-11-10 00:10:47.549275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.409 [2024-11-10 00:10:47.549559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.409 [2024-11-10 00:10:47.549601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.409 [2024-11-10 00:10:47.549627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.409 [2024-11-10 00:10:47.549667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.409 [2024-11-10 00:10:47.562869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.409 [2024-11-10 00:10:47.563349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.409 [2024-11-10 00:10:47.563398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.409 [2024-11-10 00:10:47.563424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.409 [2024-11-10 00:10:47.563719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.409 [2024-11-10 00:10:47.564004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.409 [2024-11-10 00:10:47.564036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.409 [2024-11-10 00:10:47.564058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.409 [2024-11-10 00:10:47.564086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.409 [2024-11-10 00:10:47.577273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.409 [2024-11-10 00:10:47.577761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.409 [2024-11-10 00:10:47.577809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.409 [2024-11-10 00:10:47.577832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.409 [2024-11-10 00:10:47.578143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.409 [2024-11-10 00:10:47.578427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.409 [2024-11-10 00:10:47.578459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.409 [2024-11-10 00:10:47.578482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.409 [2024-11-10 00:10:47.578504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.409 [2024-11-10 00:10:47.591729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.409 [2024-11-10 00:10:47.592244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.409 [2024-11-10 00:10:47.592291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.409 [2024-11-10 00:10:47.592315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.409 [2024-11-10 00:10:47.592629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.409 [2024-11-10 00:10:47.592937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.409 [2024-11-10 00:10:47.592970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.409 [2024-11-10 00:10:47.592998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.409 [2024-11-10 00:10:47.593024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.668 [2024-11-10 00:10:47.606438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.668 [2024-11-10 00:10:47.606945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.668 [2024-11-10 00:10:47.606988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.668 [2024-11-10 00:10:47.607023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.668 [2024-11-10 00:10:47.607306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.668 [2024-11-10 00:10:47.607630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.668 [2024-11-10 00:10:47.607663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.668 [2024-11-10 00:10:47.607688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.668 [2024-11-10 00:10:47.607712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.668 [2024-11-10 00:10:47.620972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.668 [2024-11-10 00:10:47.621444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.668 [2024-11-10 00:10:47.621494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.668 [2024-11-10 00:10:47.621520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.668 [2024-11-10 00:10:47.621817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.668 [2024-11-10 00:10:47.622099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.668 [2024-11-10 00:10:47.622130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.668 [2024-11-10 00:10:47.622153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.668 [2024-11-10 00:10:47.622175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.668 [2024-11-10 00:10:47.635381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.668 [2024-11-10 00:10:47.635864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.668 [2024-11-10 00:10:47.635916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.668 [2024-11-10 00:10:47.635942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.668 [2024-11-10 00:10:47.636224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.668 [2024-11-10 00:10:47.636509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.668 [2024-11-10 00:10:47.636540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.668 [2024-11-10 00:10:47.636563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.668 [2024-11-10 00:10:47.636594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.668 [2024-11-10 00:10:47.649955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.668 [2024-11-10 00:10:47.650400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.668 [2024-11-10 00:10:47.650451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.668 [2024-11-10 00:10:47.650477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.668 [2024-11-10 00:10:47.650773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.668 [2024-11-10 00:10:47.651056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.668 [2024-11-10 00:10:47.651088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.668 [2024-11-10 00:10:47.651112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.668 [2024-11-10 00:10:47.651134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.668 [2024-11-10 00:10:47.664491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.668 [2024-11-10 00:10:47.664949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.668 [2024-11-10 00:10:47.664999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.668 [2024-11-10 00:10:47.665030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.668 [2024-11-10 00:10:47.665313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.668 [2024-11-10 00:10:47.665605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.668 [2024-11-10 00:10:47.665637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.668 [2024-11-10 00:10:47.665659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.668 [2024-11-10 00:10:47.665682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.668 [2024-11-10 00:10:47.679024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.668 [2024-11-10 00:10:47.679522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.668 [2024-11-10 00:10:47.679598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.668 [2024-11-10 00:10:47.679627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.668 [2024-11-10 00:10:47.679911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.668 [2024-11-10 00:10:47.680194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.668 [2024-11-10 00:10:47.680226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.668 [2024-11-10 00:10:47.680248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.668 [2024-11-10 00:10:47.680270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.668 [2024-11-10 00:10:47.693599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.668 [2024-11-10 00:10:47.694054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.668 [2024-11-10 00:10:47.694104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.668 [2024-11-10 00:10:47.694145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.668 [2024-11-10 00:10:47.694423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.668 [2024-11-10 00:10:47.694733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.668 [2024-11-10 00:10:47.694760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.668 [2024-11-10 00:10:47.694779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.668 [2024-11-10 00:10:47.694798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.668 [2024-11-10 00:10:47.708173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.668 [2024-11-10 00:10:47.708643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.668 [2024-11-10 00:10:47.708690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.668 [2024-11-10 00:10:47.708714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.668 [2024-11-10 00:10:47.709009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.668 [2024-11-10 00:10:47.709298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.668 [2024-11-10 00:10:47.709330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.668 [2024-11-10 00:10:47.709353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.668 [2024-11-10 00:10:47.709376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.668 [2024-11-10 00:10:47.722523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.668 [2024-11-10 00:10:47.723018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.668 [2024-11-10 00:10:47.723090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.669 [2024-11-10 00:10:47.723117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.669 [2024-11-10 00:10:47.723398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.669 [2024-11-10 00:10:47.723707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.669 [2024-11-10 00:10:47.723735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.669 [2024-11-10 00:10:47.723756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.669 [2024-11-10 00:10:47.723775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.669 [2024-11-10 00:10:47.737060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.669 [2024-11-10 00:10:47.737539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.669 [2024-11-10 00:10:47.737601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.669 [2024-11-10 00:10:47.737628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.669 [2024-11-10 00:10:47.737934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.669 [2024-11-10 00:10:47.738217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.669 [2024-11-10 00:10:47.738248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.669 [2024-11-10 00:10:47.738271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.669 [2024-11-10 00:10:47.738293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.669 [2024-11-10 00:10:47.751407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.669 [2024-11-10 00:10:47.751902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.669 [2024-11-10 00:10:47.751946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.669 [2024-11-10 00:10:47.751968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.669 [2024-11-10 00:10:47.752257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.669 [2024-11-10 00:10:47.752541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.669 [2024-11-10 00:10:47.752572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.669 [2024-11-10 00:10:47.752613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.669 [2024-11-10 00:10:47.752637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.669 [2024-11-10 00:10:47.765986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.669 [2024-11-10 00:10:47.766450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.669 [2024-11-10 00:10:47.766498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.669 [2024-11-10 00:10:47.766524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.669 [2024-11-10 00:10:47.766817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.669 [2024-11-10 00:10:47.767101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.669 [2024-11-10 00:10:47.767132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.669 [2024-11-10 00:10:47.767155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.669 [2024-11-10 00:10:47.767177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.669 [2024-11-10 00:10:47.780635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.669 [2024-11-10 00:10:47.781080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.669 [2024-11-10 00:10:47.781129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.669 [2024-11-10 00:10:47.781155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.669 [2024-11-10 00:10:47.781438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.669 [2024-11-10 00:10:47.781735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.669 [2024-11-10 00:10:47.781768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.669 [2024-11-10 00:10:47.781791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.669 [2024-11-10 00:10:47.781814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.669 [2024-11-10 00:10:47.795158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.669 [2024-11-10 00:10:47.795664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.669 [2024-11-10 00:10:47.795714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.669 [2024-11-10 00:10:47.795741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.669 [2024-11-10 00:10:47.796022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.669 [2024-11-10 00:10:47.796304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.669 [2024-11-10 00:10:47.796335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.669 [2024-11-10 00:10:47.796359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.669 [2024-11-10 00:10:47.796381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.669 [2024-11-10 00:10:47.809732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.669 [2024-11-10 00:10:47.810189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.669 [2024-11-10 00:10:47.810237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.669 [2024-11-10 00:10:47.810263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.669 [2024-11-10 00:10:47.810544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.669 [2024-11-10 00:10:47.810837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.669 [2024-11-10 00:10:47.810870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.669 [2024-11-10 00:10:47.810893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.669 [2024-11-10 00:10:47.810916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.669 [2024-11-10 00:10:47.824266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.669 [2024-11-10 00:10:47.824753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.669 [2024-11-10 00:10:47.824804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.669 [2024-11-10 00:10:47.824830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.669 [2024-11-10 00:10:47.825113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.669 [2024-11-10 00:10:47.825396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.669 [2024-11-10 00:10:47.825427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.669 [2024-11-10 00:10:47.825450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.669 [2024-11-10 00:10:47.825472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.669 [2024-11-10 00:10:47.838626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.669 [2024-11-10 00:10:47.839144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.669 [2024-11-10 00:10:47.839191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.669 [2024-11-10 00:10:47.839215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.669 [2024-11-10 00:10:47.839519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.669 [2024-11-10 00:10:47.839816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.669 [2024-11-10 00:10:47.839848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.669 [2024-11-10 00:10:47.839871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.669 [2024-11-10 00:10:47.839893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.670 [2024-11-10 00:10:47.853007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.670 [2024-11-10 00:10:47.853455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.670 [2024-11-10 00:10:47.853515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.670 [2024-11-10 00:10:47.853553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.670 [2024-11-10 00:10:47.853846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.670 [2024-11-10 00:10:47.854138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.670 [2024-11-10 00:10:47.854171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.670 [2024-11-10 00:10:47.854194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.670 [2024-11-10 00:10:47.854228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.670 [2024-11-10 00:10:47.867511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.928 [2024-11-10 00:10:47.867970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.928 [2024-11-10 00:10:47.868019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.928 [2024-11-10 00:10:47.868052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.928 [2024-11-10 00:10:47.868331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.928 [2024-11-10 00:10:47.868624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.928 [2024-11-10 00:10:47.868656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.928 [2024-11-10 00:10:47.868679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.928 [2024-11-10 00:10:47.868702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.928 [2024-11-10 00:10:47.882069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.928 [2024-11-10 00:10:47.882533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.928 [2024-11-10 00:10:47.882578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.928 [2024-11-10 00:10:47.882627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.928 [2024-11-10 00:10:47.882940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.928 [2024-11-10 00:10:47.883223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.928 [2024-11-10 00:10:47.883254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.928 [2024-11-10 00:10:47.883277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.928 [2024-11-10 00:10:47.883300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.928 [2024-11-10 00:10:47.896416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.928 [2024-11-10 00:10:47.896868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.928 [2024-11-10 00:10:47.896930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.928 [2024-11-10 00:10:47.896953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.929 [2024-11-10 00:10:47.897244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.929 [2024-11-10 00:10:47.897528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.929 [2024-11-10 00:10:47.897560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.929 [2024-11-10 00:10:47.897583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.929 [2024-11-10 00:10:47.897621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.929 [2024-11-10 00:10:47.910987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.929 [2024-11-10 00:10:47.911537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.929 [2024-11-10 00:10:47.911582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.929 [2024-11-10 00:10:47.911615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.929 [2024-11-10 00:10:47.911911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.929 [2024-11-10 00:10:47.912194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.929 [2024-11-10 00:10:47.912225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.929 [2024-11-10 00:10:47.912249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.929 [2024-11-10 00:10:47.912271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.929 [2024-11-10 00:10:47.925371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.929 [2024-11-10 00:10:47.925838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.929 [2024-11-10 00:10:47.925887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.929 [2024-11-10 00:10:47.925913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.929 [2024-11-10 00:10:47.926193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.929 [2024-11-10 00:10:47.926475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.929 [2024-11-10 00:10:47.926507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.929 [2024-11-10 00:10:47.926530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.929 [2024-11-10 00:10:47.926552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.929 [2024-11-10 00:10:47.939934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.929 [2024-11-10 00:10:47.940390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.929 [2024-11-10 00:10:47.940441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.929 [2024-11-10 00:10:47.940467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.929 [2024-11-10 00:10:47.940762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.929 [2024-11-10 00:10:47.941045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.929 [2024-11-10 00:10:47.941082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.929 [2024-11-10 00:10:47.941106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.929 [2024-11-10 00:10:47.941128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.929 [2024-11-10 00:10:47.954472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.929 [2024-11-10 00:10:47.954927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.929 [2024-11-10 00:10:47.954976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.929 [2024-11-10 00:10:47.955002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.929 [2024-11-10 00:10:47.955285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.929 [2024-11-10 00:10:47.955569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.929 [2024-11-10 00:10:47.955612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.929 [2024-11-10 00:10:47.955636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.929 [2024-11-10 00:10:47.955658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.929 [2024-11-10 00:10:47.968828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.929 [2024-11-10 00:10:47.969273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.929 [2024-11-10 00:10:47.969314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.929 [2024-11-10 00:10:47.969364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.929 [2024-11-10 00:10:47.969659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.929 [2024-11-10 00:10:47.969942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.929 [2024-11-10 00:10:47.969974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.929 [2024-11-10 00:10:47.969997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.929 [2024-11-10 00:10:47.970019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.929 [2024-11-10 00:10:47.983344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.929 [2024-11-10 00:10:47.983827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.929 [2024-11-10 00:10:47.983877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.929 [2024-11-10 00:10:47.983903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.929 [2024-11-10 00:10:47.984183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.929 [2024-11-10 00:10:47.984466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.929 [2024-11-10 00:10:47.984498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.929 [2024-11-10 00:10:47.984526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.929 [2024-11-10 00:10:47.984549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.929 [2024-11-10 00:10:47.997906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.929 [2024-11-10 00:10:47.998328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.929 [2024-11-10 00:10:47.998369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.929 [2024-11-10 00:10:47.998397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.929 [2024-11-10 00:10:47.998692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.929 [2024-11-10 00:10:47.998976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.929 [2024-11-10 00:10:47.999007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.929 [2024-11-10 00:10:47.999031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.929 [2024-11-10 00:10:47.999053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.929 [2024-11-10 00:10:48.012421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.929 [2024-11-10 00:10:48.012966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.929 [2024-11-10 00:10:48.013013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.929 [2024-11-10 00:10:48.013036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.929 [2024-11-10 00:10:48.013336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.929 [2024-11-10 00:10:48.013633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.929 [2024-11-10 00:10:48.013666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.929 [2024-11-10 00:10:48.013688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.929 [2024-11-10 00:10:48.013711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.929 [2024-11-10 00:10:48.026835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.930 [2024-11-10 00:10:48.027322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.930 [2024-11-10 00:10:48.027366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.930 [2024-11-10 00:10:48.027389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.930 [2024-11-10 00:10:48.027683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.930 [2024-11-10 00:10:48.027967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.930 [2024-11-10 00:10:48.027998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.930 [2024-11-10 00:10:48.028021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.930 [2024-11-10 00:10:48.028042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.930 [2024-11-10 00:10:48.041185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.930 [2024-11-10 00:10:48.041634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.930 [2024-11-10 00:10:48.041685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.930 [2024-11-10 00:10:48.041712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.930 [2024-11-10 00:10:48.041993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.930 [2024-11-10 00:10:48.042276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.930 [2024-11-10 00:10:48.042307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.930 [2024-11-10 00:10:48.042330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.930 [2024-11-10 00:10:48.042352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.930 [2024-11-10 00:10:48.055699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.930 [2024-11-10 00:10:48.056161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.930 [2024-11-10 00:10:48.056212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.930 [2024-11-10 00:10:48.056238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.930 [2024-11-10 00:10:48.056518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.930 [2024-11-10 00:10:48.056823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.930 [2024-11-10 00:10:48.056856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.930 [2024-11-10 00:10:48.056879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.930 [2024-11-10 00:10:48.056902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.930 [2024-11-10 00:10:48.070238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.930 [2024-11-10 00:10:48.070697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.930 [2024-11-10 00:10:48.070743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.930 [2024-11-10 00:10:48.070766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.930 [2024-11-10 00:10:48.071055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.930 [2024-11-10 00:10:48.071337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.930 [2024-11-10 00:10:48.071369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.930 [2024-11-10 00:10:48.071392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.930 [2024-11-10 00:10:48.071415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.930 [2024-11-10 00:10:48.084781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.930 [2024-11-10 00:10:48.085253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.930 [2024-11-10 00:10:48.085299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.930 [2024-11-10 00:10:48.085328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.930 [2024-11-10 00:10:48.085635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.930 [2024-11-10 00:10:48.085918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.930 [2024-11-10 00:10:48.085950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.930 [2024-11-10 00:10:48.085972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.930 [2024-11-10 00:10:48.085994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.930 [2024-11-10 00:10:48.099346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.930 [2024-11-10 00:10:48.099822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.930 [2024-11-10 00:10:48.099872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.930 [2024-11-10 00:10:48.099898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.930 [2024-11-10 00:10:48.100179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.930 [2024-11-10 00:10:48.100462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.930 [2024-11-10 00:10:48.100493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.930 [2024-11-10 00:10:48.100515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.930 [2024-11-10 00:10:48.100538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:21.930 [2024-11-10 00:10:48.113884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:21.930 [2024-11-10 00:10:48.114341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.930 [2024-11-10 00:10:48.114390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:21.930 [2024-11-10 00:10:48.114416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:21.930 [2024-11-10 00:10:48.114710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:21.930 [2024-11-10 00:10:48.114994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:21.930 [2024-11-10 00:10:48.115026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:21.930 [2024-11-10 00:10:48.115059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:21.930 [2024-11-10 00:10:48.115084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:21.930 [2024-11-10 00:10:48.128401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.189 [2024-11-10 00:10:48.128823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.189 [2024-11-10 00:10:48.128865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.189 [2024-11-10 00:10:48.128891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.189 [2024-11-10 00:10:48.129179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.189 [2024-11-10 00:10:48.129463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.189 [2024-11-10 00:10:48.129496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.189 [2024-11-10 00:10:48.129519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.189 [2024-11-10 00:10:48.129541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.189 [2024-11-10 00:10:48.142935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.189 [2024-11-10 00:10:48.143403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.189 [2024-11-10 00:10:48.143445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.189 [2024-11-10 00:10:48.143472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.189 [2024-11-10 00:10:48.143765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.189 [2024-11-10 00:10:48.144049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.189 [2024-11-10 00:10:48.144082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.189 [2024-11-10 00:10:48.144106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.189 [2024-11-10 00:10:48.144128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.189 [2024-11-10 00:10:48.157482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.189 [2024-11-10 00:10:48.157949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.189 [2024-11-10 00:10:48.157991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.189 [2024-11-10 00:10:48.158017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.189 [2024-11-10 00:10:48.158298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.189 [2024-11-10 00:10:48.158580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.189 [2024-11-10 00:10:48.158634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.189 [2024-11-10 00:10:48.158657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.189 [2024-11-10 00:10:48.158680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.189 [2024-11-10 00:10:48.172033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.189 [2024-11-10 00:10:48.172467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.189 [2024-11-10 00:10:48.172510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.189 [2024-11-10 00:10:48.172537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.189 [2024-11-10 00:10:48.172832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.189 [2024-11-10 00:10:48.173132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.190 [2024-11-10 00:10:48.173170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.190 [2024-11-10 00:10:48.173193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.190 [2024-11-10 00:10:48.173216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.190 [2024-11-10 00:10:48.186573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.190 [2024-11-10 00:10:48.187018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.190 [2024-11-10 00:10:48.187060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.190 [2024-11-10 00:10:48.187086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.190 [2024-11-10 00:10:48.187367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.190 [2024-11-10 00:10:48.187662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.190 [2024-11-10 00:10:48.187695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.190 [2024-11-10 00:10:48.187719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.190 [2024-11-10 00:10:48.187742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.190 [2024-11-10 00:10:48.201080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.190 [2024-11-10 00:10:48.201539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.190 [2024-11-10 00:10:48.201577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.190 [2024-11-10 00:10:48.201611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.190 [2024-11-10 00:10:48.201915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.190 [2024-11-10 00:10:48.202199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.190 [2024-11-10 00:10:48.202232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.190 [2024-11-10 00:10:48.202255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.190 [2024-11-10 00:10:48.202277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.190 [2024-11-10 00:10:48.215651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.190 [2024-11-10 00:10:48.216136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.190 [2024-11-10 00:10:48.216173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.190 [2024-11-10 00:10:48.216196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.190 [2024-11-10 00:10:48.216485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.190 [2024-11-10 00:10:48.216783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.190 [2024-11-10 00:10:48.216816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.190 [2024-11-10 00:10:48.216839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.190 [2024-11-10 00:10:48.216868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.190 [2024-11-10 00:10:48.230245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.190 [2024-11-10 00:10:48.230693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.190 [2024-11-10 00:10:48.230736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.190 [2024-11-10 00:10:48.230763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.190 [2024-11-10 00:10:48.231047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.190 [2024-11-10 00:10:48.231330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.190 [2024-11-10 00:10:48.231362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.190 [2024-11-10 00:10:48.231386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.190 [2024-11-10 00:10:48.231408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.190 [2024-11-10 00:10:48.244798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.190 4244.67 IOPS, 16.58 MiB/s [2024-11-09T23:10:48.391Z] [2024-11-10 00:10:48.247142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.190 [2024-11-10 00:10:48.247184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.190 [2024-11-10 00:10:48.247211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.190 [2024-11-10 00:10:48.247492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.190 [2024-11-10 00:10:48.247789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.190 [2024-11-10 00:10:48.247823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.190 [2024-11-10 00:10:48.247846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.190 [2024-11-10 00:10:48.247869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.190 [2024-11-10 00:10:48.259335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.190 [2024-11-10 00:10:48.259819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.190 [2024-11-10 00:10:48.259862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.190 [2024-11-10 00:10:48.259889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.190 [2024-11-10 00:10:48.260171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.190 [2024-11-10 00:10:48.260452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.190 [2024-11-10 00:10:48.260485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.190 [2024-11-10 00:10:48.260508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.190 [2024-11-10 00:10:48.260530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.190 [2024-11-10 00:10:48.273919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.190 [2024-11-10 00:10:48.274394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.190 [2024-11-10 00:10:48.274430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.190 [2024-11-10 00:10:48.274452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.190 [2024-11-10 00:10:48.274754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.190 [2024-11-10 00:10:48.275036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.190 [2024-11-10 00:10:48.275068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.190 [2024-11-10 00:10:48.275091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.190 [2024-11-10 00:10:48.275113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.190 [2024-11-10 00:10:48.288481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.190 [2024-11-10 00:10:48.288961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.190 [2024-11-10 00:10:48.289002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.190 [2024-11-10 00:10:48.289028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.190 [2024-11-10 00:10:48.289309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.190 [2024-11-10 00:10:48.289604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.190 [2024-11-10 00:10:48.289636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.190 [2024-11-10 00:10:48.289660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.190 [2024-11-10 00:10:48.289683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.190 [2024-11-10 00:10:48.303049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.190 [2024-11-10 00:10:48.303507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.190 [2024-11-10 00:10:48.303567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.191 [2024-11-10 00:10:48.303606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.191 [2024-11-10 00:10:48.303891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.191 [2024-11-10 00:10:48.304173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.191 [2024-11-10 00:10:48.304206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.191 [2024-11-10 00:10:48.304229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.191 [2024-11-10 00:10:48.304252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.191 [2024-11-10 00:10:48.317634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.191 [2024-11-10 00:10:48.318079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.191 [2024-11-10 00:10:48.318121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.191 [2024-11-10 00:10:48.318154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.191 [2024-11-10 00:10:48.318436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.191 [2024-11-10 00:10:48.318736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.191 [2024-11-10 00:10:48.318769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.191 [2024-11-10 00:10:48.318792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.191 [2024-11-10 00:10:48.318814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.191 [2024-11-10 00:10:48.332174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.191 [2024-11-10 00:10:48.332612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.191 [2024-11-10 00:10:48.332654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.191 [2024-11-10 00:10:48.332680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.191 [2024-11-10 00:10:48.332962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.191 [2024-11-10 00:10:48.333245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.191 [2024-11-10 00:10:48.333277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.191 [2024-11-10 00:10:48.333300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.191 [2024-11-10 00:10:48.333323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.191 [2024-11-10 00:10:48.346738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.191 [2024-11-10 00:10:48.347201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.191 [2024-11-10 00:10:48.347242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.191 [2024-11-10 00:10:48.347269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.191 [2024-11-10 00:10:48.347553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.191 [2024-11-10 00:10:48.347847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.191 [2024-11-10 00:10:48.347880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.191 [2024-11-10 00:10:48.347904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.191 [2024-11-10 00:10:48.347927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.191 [2024-11-10 00:10:48.361258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.191 [2024-11-10 00:10:48.361699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.191 [2024-11-10 00:10:48.361743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.191 [2024-11-10 00:10:48.361770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.191 [2024-11-10 00:10:48.362051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.191 [2024-11-10 00:10:48.362342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.191 [2024-11-10 00:10:48.362374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.191 [2024-11-10 00:10:48.362397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.191 [2024-11-10 00:10:48.362420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.191 [2024-11-10 00:10:48.375763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.191 [2024-11-10 00:10:48.376243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.191 [2024-11-10 00:10:48.376285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.191 [2024-11-10 00:10:48.376311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.191 [2024-11-10 00:10:48.376605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.191 [2024-11-10 00:10:48.376890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.191 [2024-11-10 00:10:48.376922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.191 [2024-11-10 00:10:48.376985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.191 [2024-11-10 00:10:48.377010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.451 [2024-11-10 00:10:48.390157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.451 [2024-11-10 00:10:48.390608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.451 [2024-11-10 00:10:48.390650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.451 [2024-11-10 00:10:48.390676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.451 [2024-11-10 00:10:48.390959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.451 [2024-11-10 00:10:48.391243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.451 [2024-11-10 00:10:48.391275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.451 [2024-11-10 00:10:48.391298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.451 [2024-11-10 00:10:48.391320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.451 [2024-11-10 00:10:48.404715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.451 [2024-11-10 00:10:48.405176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.451 [2024-11-10 00:10:48.405217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.451 [2024-11-10 00:10:48.405243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.451 [2024-11-10 00:10:48.405526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.451 [2024-11-10 00:10:48.405823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.451 [2024-11-10 00:10:48.405855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.451 [2024-11-10 00:10:48.405885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.451 [2024-11-10 00:10:48.405908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.451 [2024-11-10 00:10:48.419092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.451 [2024-11-10 00:10:48.419512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.451 [2024-11-10 00:10:48.419554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.451 [2024-11-10 00:10:48.419597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.451 [2024-11-10 00:10:48.419883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.451 [2024-11-10 00:10:48.420166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.451 [2024-11-10 00:10:48.420198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.451 [2024-11-10 00:10:48.420221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.451 [2024-11-10 00:10:48.420243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.451 [2024-11-10 00:10:48.433624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.451 [2024-11-10 00:10:48.434096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.451 [2024-11-10 00:10:48.434139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.451 [2024-11-10 00:10:48.434165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.451 [2024-11-10 00:10:48.434446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.451 [2024-11-10 00:10:48.434745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.451 [2024-11-10 00:10:48.434777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.451 [2024-11-10 00:10:48.434800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.451 [2024-11-10 00:10:48.434822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.451 [2024-11-10 00:10:48.448037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.451 [2024-11-10 00:10:48.448497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.451 [2024-11-10 00:10:48.448539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.451 [2024-11-10 00:10:48.448564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.451 [2024-11-10 00:10:48.448856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.451 [2024-11-10 00:10:48.449142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.451 [2024-11-10 00:10:48.449173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.451 [2024-11-10 00:10:48.449196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.451 [2024-11-10 00:10:48.449223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.451 [2024-11-10 00:10:48.462631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.451 [2024-11-10 00:10:48.463137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.451 [2024-11-10 00:10:48.463177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.451 [2024-11-10 00:10:48.463203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.451 [2024-11-10 00:10:48.463485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.451 [2024-11-10 00:10:48.463781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.451 [2024-11-10 00:10:48.463814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.451 [2024-11-10 00:10:48.463837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.451 [2024-11-10 00:10:48.463858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.451 [2024-11-10 00:10:48.476995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.451 [2024-11-10 00:10:48.477438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.451 [2024-11-10 00:10:48.477480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.451 [2024-11-10 00:10:48.477505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.451 [2024-11-10 00:10:48.477798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.451 [2024-11-10 00:10:48.478080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.451 [2024-11-10 00:10:48.478111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.451 [2024-11-10 00:10:48.478134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.451 [2024-11-10 00:10:48.478156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.451 [2024-11-10 00:10:48.491545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.451 [2024-11-10 00:10:48.491992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.451 [2024-11-10 00:10:48.492034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.451 [2024-11-10 00:10:48.492061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.451 [2024-11-10 00:10:48.492344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.451 [2024-11-10 00:10:48.492644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.451 [2024-11-10 00:10:48.492677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.451 [2024-11-10 00:10:48.492699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.452 [2024-11-10 00:10:48.492722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.452 [2024-11-10 00:10:48.506091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.452 [2024-11-10 00:10:48.506543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.452 [2024-11-10 00:10:48.506584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.452 [2024-11-10 00:10:48.506623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.452 [2024-11-10 00:10:48.506907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.452 [2024-11-10 00:10:48.507191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.452 [2024-11-10 00:10:48.507222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.452 [2024-11-10 00:10:48.507245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.452 [2024-11-10 00:10:48.507267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.452 [2024-11-10 00:10:48.520681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.452 [2024-11-10 00:10:48.521161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.452 [2024-11-10 00:10:48.521222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.452 [2024-11-10 00:10:48.521248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.452 [2024-11-10 00:10:48.521532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.452 [2024-11-10 00:10:48.521826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.452 [2024-11-10 00:10:48.521858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.452 [2024-11-10 00:10:48.521881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.452 [2024-11-10 00:10:48.521903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.452 [2024-11-10 00:10:48.535124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.452 [2024-11-10 00:10:48.535569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.452 [2024-11-10 00:10:48.535619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.452 [2024-11-10 00:10:48.535648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.452 [2024-11-10 00:10:48.535931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.452 [2024-11-10 00:10:48.536242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.452 [2024-11-10 00:10:48.536275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.452 [2024-11-10 00:10:48.536299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.452 [2024-11-10 00:10:48.536321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.452 [2024-11-10 00:10:48.549517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.452 [2024-11-10 00:10:48.550033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.452 [2024-11-10 00:10:48.550092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.452 [2024-11-10 00:10:48.550118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.452 [2024-11-10 00:10:48.550405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.452 [2024-11-10 00:10:48.550702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.452 [2024-11-10 00:10:48.550734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.452 [2024-11-10 00:10:48.550758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.452 [2024-11-10 00:10:48.550780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.452 [2024-11-10 00:10:48.564001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.452 [2024-11-10 00:10:48.564447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.452 [2024-11-10 00:10:48.564488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.452 [2024-11-10 00:10:48.564514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.452 [2024-11-10 00:10:48.564808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.452 [2024-11-10 00:10:48.565102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.452 [2024-11-10 00:10:48.565134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.452 [2024-11-10 00:10:48.565157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.452 [2024-11-10 00:10:48.565179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.452 [2024-11-10 00:10:48.578584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.452 [2024-11-10 00:10:48.579040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.452 [2024-11-10 00:10:48.579082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.452 [2024-11-10 00:10:48.579108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.452 [2024-11-10 00:10:48.579391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.452 [2024-11-10 00:10:48.579688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.452 [2024-11-10 00:10:48.579720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.452 [2024-11-10 00:10:48.579743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.452 [2024-11-10 00:10:48.579766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.452 [2024-11-10 00:10:48.593170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.452 [2024-11-10 00:10:48.593618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.452 [2024-11-10 00:10:48.593659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.452 [2024-11-10 00:10:48.593686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.452 [2024-11-10 00:10:48.593969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.452 [2024-11-10 00:10:48.594260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.452 [2024-11-10 00:10:48.594291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.452 [2024-11-10 00:10:48.594314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.452 [2024-11-10 00:10:48.594336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.452 [2024-11-10 00:10:48.607704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.452 [2024-11-10 00:10:48.608163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.452 [2024-11-10 00:10:48.608205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.452 [2024-11-10 00:10:48.608231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.452 [2024-11-10 00:10:48.608512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.452 [2024-11-10 00:10:48.608816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.452 [2024-11-10 00:10:48.608849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.452 [2024-11-10 00:10:48.608872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.452 [2024-11-10 00:10:48.608895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.452 [2024-11-10 00:10:48.622296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.452 [2024-11-10 00:10:48.622779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.452 [2024-11-10 00:10:48.622821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.452 [2024-11-10 00:10:48.622847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.452 [2024-11-10 00:10:48.623129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.453 [2024-11-10 00:10:48.623411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.453 [2024-11-10 00:10:48.623443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.453 [2024-11-10 00:10:48.623467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.453 [2024-11-10 00:10:48.623490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.453 [2024-11-10 00:10:48.636930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.453 [2024-11-10 00:10:48.637393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.453 [2024-11-10 00:10:48.637434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.453 [2024-11-10 00:10:48.637461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.453 [2024-11-10 00:10:48.637754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.453 [2024-11-10 00:10:48.638038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.453 [2024-11-10 00:10:48.638071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.453 [2024-11-10 00:10:48.638102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.453 [2024-11-10 00:10:48.638127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.712 [2024-11-10 00:10:48.651506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.712 [2024-11-10 00:10:48.651944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.712 [2024-11-10 00:10:48.651986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.713 [2024-11-10 00:10:48.652013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.713 [2024-11-10 00:10:48.652295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.713 [2024-11-10 00:10:48.652577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.713 [2024-11-10 00:10:48.652621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.713 [2024-11-10 00:10:48.652647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.713 [2024-11-10 00:10:48.652670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.713 [2024-11-10 00:10:48.666059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.713 [2024-11-10 00:10:48.666480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.713 [2024-11-10 00:10:48.666522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.713 [2024-11-10 00:10:48.666548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.713 [2024-11-10 00:10:48.666843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.713 [2024-11-10 00:10:48.667125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.713 [2024-11-10 00:10:48.667157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.713 [2024-11-10 00:10:48.667179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.713 [2024-11-10 00:10:48.667202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.713 [2024-11-10 00:10:48.680562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.713 [2024-11-10 00:10:48.681003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.713 [2024-11-10 00:10:48.681045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.713 [2024-11-10 00:10:48.681071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.713 [2024-11-10 00:10:48.681352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.713 [2024-11-10 00:10:48.681649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.713 [2024-11-10 00:10:48.681683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.713 [2024-11-10 00:10:48.681707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.713 [2024-11-10 00:10:48.681730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.713 [2024-11-10 00:10:48.695110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.713 [2024-11-10 00:10:48.695633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.713 [2024-11-10 00:10:48.695692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.713 [2024-11-10 00:10:48.695719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.713 [2024-11-10 00:10:48.696001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.713 [2024-11-10 00:10:48.696283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.713 [2024-11-10 00:10:48.696314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.713 [2024-11-10 00:10:48.696338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.713 [2024-11-10 00:10:48.696360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.713 [2024-11-10 00:10:48.709546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.713 [2024-11-10 00:10:48.709985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.713 [2024-11-10 00:10:48.710027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.713 [2024-11-10 00:10:48.710053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.713 [2024-11-10 00:10:48.710337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.713 [2024-11-10 00:10:48.710638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.713 [2024-11-10 00:10:48.710670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.713 [2024-11-10 00:10:48.710699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.713 [2024-11-10 00:10:48.710721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.713 [2024-11-10 00:10:48.724170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.713 [2024-11-10 00:10:48.724651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.713 [2024-11-10 00:10:48.724693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.713 [2024-11-10 00:10:48.724728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.713 [2024-11-10 00:10:48.725010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.713 [2024-11-10 00:10:48.725295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.713 [2024-11-10 00:10:48.725327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.713 [2024-11-10 00:10:48.725349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.713 [2024-11-10 00:10:48.725372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.713 [2024-11-10 00:10:48.738926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.713 [2024-11-10 00:10:48.739402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.713 [2024-11-10 00:10:48.739450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.713 [2024-11-10 00:10:48.739478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.713 [2024-11-10 00:10:48.739771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.713 [2024-11-10 00:10:48.740062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.713 [2024-11-10 00:10:48.740095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.713 [2024-11-10 00:10:48.740118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.713 [2024-11-10 00:10:48.740141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.713 [2024-11-10 00:10:48.753297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.713 [2024-11-10 00:10:48.753770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.713 [2024-11-10 00:10:48.753812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.713 [2024-11-10 00:10:48.753839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.713 [2024-11-10 00:10:48.754121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.713 [2024-11-10 00:10:48.754403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.713 [2024-11-10 00:10:48.754436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.713 [2024-11-10 00:10:48.754459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.713 [2024-11-10 00:10:48.754482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.713 [2024-11-10 00:10:48.767894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.713 [2024-11-10 00:10:48.768344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.713 [2024-11-10 00:10:48.768385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.714 [2024-11-10 00:10:48.768411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.714 [2024-11-10 00:10:48.768707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.714 [2024-11-10 00:10:48.768989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.714 [2024-11-10 00:10:48.769022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.714 [2024-11-10 00:10:48.769046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.714 [2024-11-10 00:10:48.769068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.714 [2024-11-10 00:10:48.782475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.714 [2024-11-10 00:10:48.782932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.714 [2024-11-10 00:10:48.782975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.714 [2024-11-10 00:10:48.783002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.714 [2024-11-10 00:10:48.783291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.714 [2024-11-10 00:10:48.783576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.714 [2024-11-10 00:10:48.783622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.714 [2024-11-10 00:10:48.783646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.714 [2024-11-10 00:10:48.783669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.714 [2024-11-10 00:10:48.797048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.714 [2024-11-10 00:10:48.797519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.714 [2024-11-10 00:10:48.797576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.714 [2024-11-10 00:10:48.797616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.714 [2024-11-10 00:10:48.797901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.714 [2024-11-10 00:10:48.798184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.714 [2024-11-10 00:10:48.798215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.714 [2024-11-10 00:10:48.798238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.714 [2024-11-10 00:10:48.798260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.714 [2024-11-10 00:10:48.811643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.714 [2024-11-10 00:10:48.812066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.714 [2024-11-10 00:10:48.812107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.714 [2024-11-10 00:10:48.812133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.714 [2024-11-10 00:10:48.812414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.714 [2024-11-10 00:10:48.812711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.714 [2024-11-10 00:10:48.812744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.714 [2024-11-10 00:10:48.812767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.714 [2024-11-10 00:10:48.812790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.714 [2024-11-10 00:10:48.826191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.714 [2024-11-10 00:10:48.826643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.714 [2024-11-10 00:10:48.826685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.714 [2024-11-10 00:10:48.826711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.714 [2024-11-10 00:10:48.826994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.714 [2024-11-10 00:10:48.827276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.714 [2024-11-10 00:10:48.827314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.714 [2024-11-10 00:10:48.827338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.714 [2024-11-10 00:10:48.827360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.714 [2024-11-10 00:10:48.840800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.714 [2024-11-10 00:10:48.841282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.714 [2024-11-10 00:10:48.841323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.714 [2024-11-10 00:10:48.841349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.714 [2024-11-10 00:10:48.841647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.714 [2024-11-10 00:10:48.841929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.714 [2024-11-10 00:10:48.841961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.714 [2024-11-10 00:10:48.841985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.714 [2024-11-10 00:10:48.842008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.714 [2024-11-10 00:10:48.855380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.714 [2024-11-10 00:10:48.855841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.714 [2024-11-10 00:10:48.855883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.714 [2024-11-10 00:10:48.855910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.714 [2024-11-10 00:10:48.856191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.714 [2024-11-10 00:10:48.856473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.714 [2024-11-10 00:10:48.856505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.714 [2024-11-10 00:10:48.856528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.714 [2024-11-10 00:10:48.856550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.714 [2024-11-10 00:10:48.869914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.714 [2024-11-10 00:10:48.870371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.714 [2024-11-10 00:10:48.870413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.714 [2024-11-10 00:10:48.870439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.714 [2024-11-10 00:10:48.870735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.714 [2024-11-10 00:10:48.871018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.714 [2024-11-10 00:10:48.871051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.714 [2024-11-10 00:10:48.871073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.714 [2024-11-10 00:10:48.871101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.714 [2024-11-10 00:10:48.884465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.714 [2024-11-10 00:10:48.884928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.714 [2024-11-10 00:10:48.884971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.714 [2024-11-10 00:10:48.884997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.714 [2024-11-10 00:10:48.885279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.715 [2024-11-10 00:10:48.885561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.715 [2024-11-10 00:10:48.885607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.715 [2024-11-10 00:10:48.885632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.715 [2024-11-10 00:10:48.885654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.715 [2024-11-10 00:10:48.899014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.715 [2024-11-10 00:10:48.899471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.715 [2024-11-10 00:10:48.899512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.715 [2024-11-10 00:10:48.899538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.715 [2024-11-10 00:10:48.899834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.715 [2024-11-10 00:10:48.900117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.715 [2024-11-10 00:10:48.900149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.715 [2024-11-10 00:10:48.900174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.715 [2024-11-10 00:10:48.900197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.974 [2024-11-10 00:10:48.913501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.974 [2024-11-10 00:10:48.913976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.974 [2024-11-10 00:10:48.914018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.974 [2024-11-10 00:10:48.914046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.974 [2024-11-10 00:10:48.914334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.974 [2024-11-10 00:10:48.914645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.974 [2024-11-10 00:10:48.914683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.974 [2024-11-10 00:10:48.914707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.974 [2024-11-10 00:10:48.914732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.974 [2024-11-10 00:10:48.927910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.974 [2024-11-10 00:10:48.928374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.974 [2024-11-10 00:10:48.928416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.974 [2024-11-10 00:10:48.928443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.974 [2024-11-10 00:10:48.928737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.974 [2024-11-10 00:10:48.929020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.974 [2024-11-10 00:10:48.929051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.974 [2024-11-10 00:10:48.929074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.974 [2024-11-10 00:10:48.929097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.974 [2024-11-10 00:10:48.942487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:48.942944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:48.942986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:48.943012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:48.943293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:48.943575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:48.943620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:48.943645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:48.943668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.975 [2024-11-10 00:10:48.957046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:48.957519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:48.957561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:48.957599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:48.957885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:48.958168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:48.958202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:48.958225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:48.958248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.975 [2024-11-10 00:10:48.971602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:48.972020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:48.972062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:48.972094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:48.972376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:48.972674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:48.972708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:48.972733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:48.972755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.975 [2024-11-10 00:10:48.986098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:48.986567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:48.986618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:48.986646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:48.986928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:48.987210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:48.987241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:48.987264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:48.987286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.975 [2024-11-10 00:10:49.000644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:49.001088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:49.001130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:49.001156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:49.001453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:49.001752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:49.001785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:49.001809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:49.001832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.975 [2024-11-10 00:10:49.015187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:49.015644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:49.015686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:49.015713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:49.016000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:49.016283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:49.016315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:49.016338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:49.016360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.975 [2024-11-10 00:10:49.029734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:49.030169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:49.030210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:49.030236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:49.030518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:49.030815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:49.030849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:49.030872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:49.030895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.975 [2024-11-10 00:10:49.044279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:49.044750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:49.044793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:49.044819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:49.045100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:49.045381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:49.045413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:49.045437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:49.045460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.975 [2024-11-10 00:10:49.058808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:49.059259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:49.059301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:49.059328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:49.059623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:49.059904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:49.059945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:49.059969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:49.059993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.975 [2024-11-10 00:10:49.073353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:49.073799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:49.073841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:49.073867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:49.074149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:49.074431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:49.074463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:49.074486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:49.074508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.975 [2024-11-10 00:10:49.087872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:49.088328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:49.088370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:49.088395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:49.088691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:49.088974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:49.089006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:49.089029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:49.089052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.975 [2024-11-10 00:10:49.102384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:49.102854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:49.102895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:49.102921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:49.103202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:49.103484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:49.103517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:49.103540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:49.103569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.975 [2024-11-10 00:10:49.116925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:49.117355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:49.117397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:49.117422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:49.117717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:49.117998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:49.118031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:49.118054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:49.118077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.975 [2024-11-10 00:10:49.131406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:49.131837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:49.131879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:49.131905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:49.132185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:49.132468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:49.132500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:49.132522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:49.132544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:22.975 [2024-11-10 00:10:49.145965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:49.146434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:49.146475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:49.146501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:49.146795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.975 [2024-11-10 00:10:49.147081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.975 [2024-11-10 00:10:49.147113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.975 [2024-11-10 00:10:49.147135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.975 [2024-11-10 00:10:49.147157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:22.975 [2024-11-10 00:10:49.160564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:22.975 [2024-11-10 00:10:49.161050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.975 [2024-11-10 00:10:49.161092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:22.975 [2024-11-10 00:10:49.161118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:22.975 [2024-11-10 00:10:49.161419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:22.976 [2024-11-10 00:10:49.161722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:22.976 [2024-11-10 00:10:49.161760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:22.976 [2024-11-10 00:10:49.161784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:22.976 [2024-11-10 00:10:49.161806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.236 [2024-11-10 00:10:49.175144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.236 [2024-11-10 00:10:49.175609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.236 [2024-11-10 00:10:49.175676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.236 [2024-11-10 00:10:49.175703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.236 [2024-11-10 00:10:49.175984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.236 [2024-11-10 00:10:49.176268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.236 [2024-11-10 00:10:49.176300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.236 [2024-11-10 00:10:49.176322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.236 [2024-11-10 00:10:49.176344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.236 [2024-11-10 00:10:49.189542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.236 [2024-11-10 00:10:49.190058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.236 [2024-11-10 00:10:49.190100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.236 [2024-11-10 00:10:49.190126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.236 [2024-11-10 00:10:49.190408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.236 [2024-11-10 00:10:49.190704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.236 [2024-11-10 00:10:49.190737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.236 [2024-11-10 00:10:49.190761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.236 [2024-11-10 00:10:49.190784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.236 [2024-11-10 00:10:49.204030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.236 [2024-11-10 00:10:49.204493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.236 [2024-11-10 00:10:49.204534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.236 [2024-11-10 00:10:49.204565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.236 [2024-11-10 00:10:49.204864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.236 [2024-11-10 00:10:49.205150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.236 [2024-11-10 00:10:49.205195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.236 [2024-11-10 00:10:49.205218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.236 [2024-11-10 00:10:49.205241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.236 [2024-11-10 00:10:49.218503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.236 [2024-11-10 00:10:49.218952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.236 [2024-11-10 00:10:49.218994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.236 [2024-11-10 00:10:49.219020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.236 [2024-11-10 00:10:49.219312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.236 [2024-11-10 00:10:49.219613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.236 [2024-11-10 00:10:49.219646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.236 [2024-11-10 00:10:49.219670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.236 [2024-11-10 00:10:49.219692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.236 [2024-11-10 00:10:49.232882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.236 [2024-11-10 00:10:49.233337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.236 [2024-11-10 00:10:49.233379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.236 [2024-11-10 00:10:49.233405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.236 [2024-11-10 00:10:49.233704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.236 [2024-11-10 00:10:49.233986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.236 [2024-11-10 00:10:49.234019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.236 [2024-11-10 00:10:49.234041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.236 [2024-11-10 00:10:49.234064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.236 [2024-11-10 00:10:49.247305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.236 3183.50 IOPS, 12.44 MiB/s [2024-11-09T23:10:49.437Z] [2024-11-10 00:10:49.249638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.236 [2024-11-10 00:10:49.249682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.236 [2024-11-10 00:10:49.249709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.236 [2024-11-10 00:10:49.249994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.236 [2024-11-10 00:10:49.250284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.236 [2024-11-10 00:10:49.250316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.236 [2024-11-10 00:10:49.250340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.236 [2024-11-10 00:10:49.250364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.236 [2024-11-10 00:10:49.261901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.236 [2024-11-10 00:10:49.262382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.236 [2024-11-10 00:10:49.262424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.236 [2024-11-10 00:10:49.262452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.236 [2024-11-10 00:10:49.262747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.236 [2024-11-10 00:10:49.263032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.236 [2024-11-10 00:10:49.263064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.236 [2024-11-10 00:10:49.263087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.236 [2024-11-10 00:10:49.263110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.236 [2024-11-10 00:10:49.276289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.236 [2024-11-10 00:10:49.276723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.236 [2024-11-10 00:10:49.276766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.236 [2024-11-10 00:10:49.276792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.236 [2024-11-10 00:10:49.277078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.236 [2024-11-10 00:10:49.277362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.236 [2024-11-10 00:10:49.277395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.236 [2024-11-10 00:10:49.277418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.236 [2024-11-10 00:10:49.277440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.237 [2024-11-10 00:10:49.290905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.237 [2024-11-10 00:10:49.291363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.237 [2024-11-10 00:10:49.291405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.237 [2024-11-10 00:10:49.291431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.237 [2024-11-10 00:10:49.291726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.237 [2024-11-10 00:10:49.292008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.237 [2024-11-10 00:10:49.292040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.237 [2024-11-10 00:10:49.292074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.237 [2024-11-10 00:10:49.292097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.237 [2024-11-10 00:10:49.305328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.237 [2024-11-10 00:10:49.305806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.237 [2024-11-10 00:10:49.305847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.237 [2024-11-10 00:10:49.305873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.237 [2024-11-10 00:10:49.306153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.237 [2024-11-10 00:10:49.306437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.237 [2024-11-10 00:10:49.306471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.237 [2024-11-10 00:10:49.306494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.237 [2024-11-10 00:10:49.306517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.237 [2024-11-10 00:10:49.319807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.237 [2024-11-10 00:10:49.320258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.237 [2024-11-10 00:10:49.320300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.237 [2024-11-10 00:10:49.320326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.237 [2024-11-10 00:10:49.320626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.237 [2024-11-10 00:10:49.320911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.237 [2024-11-10 00:10:49.320944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.237 [2024-11-10 00:10:49.320968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.237 [2024-11-10 00:10:49.320991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.237 [2024-11-10 00:10:49.334287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.237 [2024-11-10 00:10:49.334768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.237 [2024-11-10 00:10:49.334809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.237 [2024-11-10 00:10:49.334835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.237 [2024-11-10 00:10:49.335122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.237 [2024-11-10 00:10:49.335409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.237 [2024-11-10 00:10:49.335450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.237 [2024-11-10 00:10:49.335473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.237 [2024-11-10 00:10:49.335496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.237 [2024-11-10 00:10:49.348755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.237 [2024-11-10 00:10:49.349208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.237 [2024-11-10 00:10:49.349250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.237 [2024-11-10 00:10:49.349277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.237 [2024-11-10 00:10:49.349561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.237 [2024-11-10 00:10:49.349859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.237 [2024-11-10 00:10:49.349892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.237 [2024-11-10 00:10:49.349916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.237 [2024-11-10 00:10:49.349939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.237 [2024-11-10 00:10:49.363181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.237 [2024-11-10 00:10:49.363670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.237 [2024-11-10 00:10:49.363714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.237 [2024-11-10 00:10:49.363741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.237 [2024-11-10 00:10:49.364027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.237 [2024-11-10 00:10:49.364311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.237 [2024-11-10 00:10:49.364344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.237 [2024-11-10 00:10:49.364367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.237 [2024-11-10 00:10:49.364390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.237 [2024-11-10 00:10:49.377606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.237 [2024-11-10 00:10:49.378046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.237 [2024-11-10 00:10:49.378088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.237 [2024-11-10 00:10:49.378114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.237 [2024-11-10 00:10:49.378396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.237 [2024-11-10 00:10:49.378692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.237 [2024-11-10 00:10:49.378726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.237 [2024-11-10 00:10:49.378750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.237 [2024-11-10 00:10:49.378772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.237 [2024-11-10 00:10:49.392256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.237 [2024-11-10 00:10:49.392695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.237 [2024-11-10 00:10:49.392741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.237 [2024-11-10 00:10:49.392768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.237 [2024-11-10 00:10:49.393062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.237 [2024-11-10 00:10:49.393359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.237 [2024-11-10 00:10:49.393392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.237 [2024-11-10 00:10:49.393415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.237 [2024-11-10 00:10:49.393438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.237 [2024-11-10 00:10:49.406717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.237 [2024-11-10 00:10:49.407167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.237 [2024-11-10 00:10:49.407208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.237 [2024-11-10 00:10:49.407233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.237 [2024-11-10 00:10:49.407516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.237 [2024-11-10 00:10:49.407810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.238 [2024-11-10 00:10:49.407843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.238 [2024-11-10 00:10:49.407866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.238 [2024-11-10 00:10:49.407904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.238 [2024-11-10 00:10:49.421174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.238 [2024-11-10 00:10:49.421636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.238 [2024-11-10 00:10:49.421679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.238 [2024-11-10 00:10:49.421706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.238 [2024-11-10 00:10:49.421988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.238 [2024-11-10 00:10:49.422275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.238 [2024-11-10 00:10:49.422308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.238 [2024-11-10 00:10:49.422333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.238 [2024-11-10 00:10:49.422357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.238 [2024-11-10 00:10:49.435701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.497 [2024-11-10 00:10:49.436153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.497 [2024-11-10 00:10:49.436198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.497 [2024-11-10 00:10:49.436224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.497 [2024-11-10 00:10:49.436524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.497 [2024-11-10 00:10:49.436834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.497 [2024-11-10 00:10:49.436869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.497 [2024-11-10 00:10:49.436893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.497 [2024-11-10 00:10:49.436916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.497 [2024-11-10 00:10:49.450235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.497 [2024-11-10 00:10:49.450686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.497 [2024-11-10 00:10:49.450729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.497 [2024-11-10 00:10:49.450756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.497 [2024-11-10 00:10:49.451040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.497 [2024-11-10 00:10:49.451327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.497 [2024-11-10 00:10:49.451361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.497 [2024-11-10 00:10:49.451384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.497 [2024-11-10 00:10:49.451407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.497 [2024-11-10 00:10:49.464656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.497 [2024-11-10 00:10:49.465107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.497 [2024-11-10 00:10:49.465148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.497 [2024-11-10 00:10:49.465175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.497 [2024-11-10 00:10:49.465458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.497 [2024-11-10 00:10:49.465756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.497 [2024-11-10 00:10:49.465788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.497 [2024-11-10 00:10:49.465812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.497 [2024-11-10 00:10:49.465835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.497 [2024-11-10 00:10:49.479064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.497 [2024-11-10 00:10:49.479516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.497 [2024-11-10 00:10:49.479558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.497 [2024-11-10 00:10:49.479584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.497 [2024-11-10 00:10:49.479879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.497 [2024-11-10 00:10:49.480171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.497 [2024-11-10 00:10:49.480202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.497 [2024-11-10 00:10:49.480224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.497 [2024-11-10 00:10:49.480246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.497 [2024-11-10 00:10:49.493476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.497 [2024-11-10 00:10:49.493957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.497 [2024-11-10 00:10:49.493999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.497 [2024-11-10 00:10:49.494025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.497 [2024-11-10 00:10:49.494308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.497 [2024-11-10 00:10:49.494604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.497 [2024-11-10 00:10:49.494637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.497 [2024-11-10 00:10:49.494660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.497 [2024-11-10 00:10:49.494682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.497 [2024-11-10 00:10:49.508131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.497 [2024-11-10 00:10:49.508553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.497 [2024-11-10 00:10:49.508604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.497 [2024-11-10 00:10:49.508633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.497 [2024-11-10 00:10:49.508916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.497 [2024-11-10 00:10:49.509200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.497 [2024-11-10 00:10:49.509234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.497 [2024-11-10 00:10:49.509257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.497 [2024-11-10 00:10:49.509280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.497 [2024-11-10 00:10:49.522745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.497 [2024-11-10 00:10:49.523209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.498 [2024-11-10 00:10:49.523251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.498 [2024-11-10 00:10:49.523277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.498 [2024-11-10 00:10:49.523558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.498 [2024-11-10 00:10:49.523856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.498 [2024-11-10 00:10:49.523890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.498 [2024-11-10 00:10:49.523921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.498 [2024-11-10 00:10:49.523945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.498 [2024-11-10 00:10:49.537192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.498 [2024-11-10 00:10:49.537651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.498 [2024-11-10 00:10:49.537694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.498 [2024-11-10 00:10:49.537723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.498 [2024-11-10 00:10:49.538016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.498 [2024-11-10 00:10:49.538302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.498 [2024-11-10 00:10:49.538334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.498 [2024-11-10 00:10:49.538357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.498 [2024-11-10 00:10:49.538380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.498 [2024-11-10 00:10:49.551714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.498 [2024-11-10 00:10:49.552166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.498 [2024-11-10 00:10:49.552207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.498 [2024-11-10 00:10:49.552234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.498 [2024-11-10 00:10:49.552518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.498 [2024-11-10 00:10:49.552816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.498 [2024-11-10 00:10:49.552849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.498 [2024-11-10 00:10:49.552872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.498 [2024-11-10 00:10:49.552894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.498 [2024-11-10 00:10:49.566157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.498 [2024-11-10 00:10:49.566630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.498 [2024-11-10 00:10:49.566673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.498 [2024-11-10 00:10:49.566699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.498 [2024-11-10 00:10:49.566984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.498 [2024-11-10 00:10:49.567272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.498 [2024-11-10 00:10:49.567303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.498 [2024-11-10 00:10:49.567326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.498 [2024-11-10 00:10:49.567348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.498 [2024-11-10 00:10:49.580645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.498 [2024-11-10 00:10:49.581097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.498 [2024-11-10 00:10:49.581139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.498 [2024-11-10 00:10:49.581166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.498 [2024-11-10 00:10:49.581449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.498 [2024-11-10 00:10:49.581747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.498 [2024-11-10 00:10:49.581779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.498 [2024-11-10 00:10:49.581802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.498 [2024-11-10 00:10:49.581824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.498 [2024-11-10 00:10:49.595082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.498 [2024-11-10 00:10:49.595551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.498 [2024-11-10 00:10:49.595599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.498 [2024-11-10 00:10:49.595627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.498 [2024-11-10 00:10:49.595911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.498 [2024-11-10 00:10:49.596195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.498 [2024-11-10 00:10:49.596227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.498 [2024-11-10 00:10:49.596250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.498 [2024-11-10 00:10:49.596272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.498 [2024-11-10 00:10:49.609485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.498 [2024-11-10 00:10:49.609928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.498 [2024-11-10 00:10:49.609970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.498 [2024-11-10 00:10:49.609996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.498 [2024-11-10 00:10:49.610280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.498 [2024-11-10 00:10:49.610564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.498 [2024-11-10 00:10:49.610605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.498 [2024-11-10 00:10:49.610631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.498 [2024-11-10 00:10:49.610653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.498 [2024-11-10 00:10:49.624115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.498 [2024-11-10 00:10:49.624577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.498 [2024-11-10 00:10:49.624631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.498 [2024-11-10 00:10:49.624659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.498 [2024-11-10 00:10:49.624942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.498 [2024-11-10 00:10:49.625227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.498 [2024-11-10 00:10:49.625259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.498 [2024-11-10 00:10:49.625282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.498 [2024-11-10 00:10:49.625304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.498 [2024-11-10 00:10:49.638509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.498 [2024-11-10 00:10:49.638978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.498 [2024-11-10 00:10:49.639020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.498 [2024-11-10 00:10:49.639045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.498 [2024-11-10 00:10:49.639329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.498 [2024-11-10 00:10:49.639627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.498 [2024-11-10 00:10:49.639660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.498 [2024-11-10 00:10:49.639683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.499 [2024-11-10 00:10:49.639705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.499 [2024-11-10 00:10:49.652962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.499 [2024-11-10 00:10:49.653413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.499 [2024-11-10 00:10:49.653455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.499 [2024-11-10 00:10:49.653481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.499 [2024-11-10 00:10:49.653775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.499 [2024-11-10 00:10:49.654061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.499 [2024-11-10 00:10:49.654093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.499 [2024-11-10 00:10:49.654116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.499 [2024-11-10 00:10:49.654138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.499 [2024-11-10 00:10:49.666848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.499 [2024-11-10 00:10:49.667276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.499 [2024-11-10 00:10:49.667312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.499 [2024-11-10 00:10:49.667336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.499 [2024-11-10 00:10:49.667648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.499 [2024-11-10 00:10:49.667891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.499 [2024-11-10 00:10:49.667932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.499 [2024-11-10 00:10:49.667952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.499 [2024-11-10 00:10:49.667970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.499 [2024-11-10 00:10:49.680614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.499 [2024-11-10 00:10:49.681121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.499 [2024-11-10 00:10:49.681158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.499 [2024-11-10 00:10:49.681182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.499 [2024-11-10 00:10:49.681478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.499 [2024-11-10 00:10:49.681751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.499 [2024-11-10 00:10:49.681795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.499 [2024-11-10 00:10:49.681816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.499 [2024-11-10 00:10:49.681841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.499 [2024-11-10 00:10:49.694745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.499 [2024-11-10 00:10:49.695195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.499 [2024-11-10 00:10:49.695233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.499 [2024-11-10 00:10:49.695257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.499 [2024-11-10 00:10:49.695544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.499 [2024-11-10 00:10:49.695842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.499 [2024-11-10 00:10:49.695886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.499 [2024-11-10 00:10:49.695906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.499 [2024-11-10 00:10:49.695924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.758 [2024-11-10 00:10:49.708792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.758 [2024-11-10 00:10:49.709315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.758 [2024-11-10 00:10:49.709352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.758 [2024-11-10 00:10:49.709375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.758 [2024-11-10 00:10:49.709686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.758 [2024-11-10 00:10:49.709951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.758 [2024-11-10 00:10:49.709982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.758 [2024-11-10 00:10:49.710001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.758 [2024-11-10 00:10:49.710020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.758 [2024-11-10 00:10:49.722890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.758 [2024-11-10 00:10:49.723304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.758 [2024-11-10 00:10:49.723342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.758 [2024-11-10 00:10:49.723366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.758 [2024-11-10 00:10:49.723667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.758 [2024-11-10 00:10:49.723942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.758 [2024-11-10 00:10:49.723985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.758 [2024-11-10 00:10:49.724006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.758 [2024-11-10 00:10:49.724025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.758 [2024-11-10 00:10:49.736918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.758 [2024-11-10 00:10:49.737351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.758 [2024-11-10 00:10:49.737386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.758 [2024-11-10 00:10:49.737408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.758 [2024-11-10 00:10:49.737680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.758 [2024-11-10 00:10:49.737962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.758 [2024-11-10 00:10:49.737988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.758 [2024-11-10 00:10:49.738006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.758 [2024-11-10 00:10:49.738024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.758 [2024-11-10 00:10:49.750801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.758 [2024-11-10 00:10:49.751313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.758 [2024-11-10 00:10:49.751350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.758 [2024-11-10 00:10:49.751374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.759 [2024-11-10 00:10:49.751700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.759 [2024-11-10 00:10:49.751971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.759 [2024-11-10 00:10:49.751997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.759 [2024-11-10 00:10:49.752016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.759 [2024-11-10 00:10:49.752038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.759 [2024-11-10 00:10:49.764848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.759 [2024-11-10 00:10:49.765299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.759 [2024-11-10 00:10:49.765336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.759 [2024-11-10 00:10:49.765359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.759 [2024-11-10 00:10:49.765659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.759 [2024-11-10 00:10:49.765903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.759 [2024-11-10 00:10:49.765946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.759 [2024-11-10 00:10:49.765965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.759 [2024-11-10 00:10:49.765983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.759 [2024-11-10 00:10:49.778724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.759 [2024-11-10 00:10:49.779170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.759 [2024-11-10 00:10:49.779209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.759 [2024-11-10 00:10:49.779233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.759 [2024-11-10 00:10:49.779528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.759 [2024-11-10 00:10:49.779783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.759 [2024-11-10 00:10:49.779811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.759 [2024-11-10 00:10:49.779830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.759 [2024-11-10 00:10:49.779849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.759 [2024-11-10 00:10:49.792614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.759 [2024-11-10 00:10:49.793080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.759 [2024-11-10 00:10:49.793127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.759 [2024-11-10 00:10:49.793151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.759 [2024-11-10 00:10:49.793447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.759 [2024-11-10 00:10:49.793722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.759 [2024-11-10 00:10:49.793750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.759 [2024-11-10 00:10:49.793770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.759 [2024-11-10 00:10:49.793788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.759 [2024-11-10 00:10:49.806317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.759 [2024-11-10 00:10:49.806818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.759 [2024-11-10 00:10:49.806855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.759 [2024-11-10 00:10:49.806878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.759 [2024-11-10 00:10:49.807175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.759 [2024-11-10 00:10:49.807403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.759 [2024-11-10 00:10:49.807429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.759 [2024-11-10 00:10:49.807448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.759 [2024-11-10 00:10:49.807466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.759 [2024-11-10 00:10:49.820082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.759 [2024-11-10 00:10:49.820481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.759 [2024-11-10 00:10:49.820518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.759 [2024-11-10 00:10:49.820558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.759 [2024-11-10 00:10:49.820886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.759 [2024-11-10 00:10:49.821129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.759 [2024-11-10 00:10:49.821156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.759 [2024-11-10 00:10:49.821175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.759 [2024-11-10 00:10:49.821193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.759 [2024-11-10 00:10:49.833760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.759 [2024-11-10 00:10:49.834252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.759 [2024-11-10 00:10:49.834290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.759 [2024-11-10 00:10:49.834314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.759 [2024-11-10 00:10:49.834618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.759 [2024-11-10 00:10:49.834869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.759 [2024-11-10 00:10:49.834895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.759 [2024-11-10 00:10:49.834913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.759 [2024-11-10 00:10:49.834946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.759 [2024-11-10 00:10:49.847482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.759 [2024-11-10 00:10:49.847917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.759 [2024-11-10 00:10:49.847954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.759 [2024-11-10 00:10:49.847984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.759 [2024-11-10 00:10:49.848283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.759 [2024-11-10 00:10:49.848525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.759 [2024-11-10 00:10:49.848552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.759 [2024-11-10 00:10:49.848571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.759 [2024-11-10 00:10:49.848612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.759 [2024-11-10 00:10:49.861298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.759 [2024-11-10 00:10:49.861748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.759 [2024-11-10 00:10:49.861786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.759 [2024-11-10 00:10:49.861809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.759 [2024-11-10 00:10:49.862097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.759 [2024-11-10 00:10:49.862323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.759 [2024-11-10 00:10:49.862350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.759 [2024-11-10 00:10:49.862369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.759 [2024-11-10 00:10:49.862388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.759 [2024-11-10 00:10:49.875103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.760 [2024-11-10 00:10:49.875525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.760 [2024-11-10 00:10:49.875563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.760 [2024-11-10 00:10:49.875613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.760 [2024-11-10 00:10:49.875925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.760 [2024-11-10 00:10:49.876155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.760 [2024-11-10 00:10:49.876181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.760 [2024-11-10 00:10:49.876200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.760 [2024-11-10 00:10:49.876219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.760 [2024-11-10 00:10:49.888976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.760 [2024-11-10 00:10:49.889440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.760 [2024-11-10 00:10:49.889476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.760 [2024-11-10 00:10:49.889499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.760 [2024-11-10 00:10:49.889786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.760 [2024-11-10 00:10:49.890056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.760 [2024-11-10 00:10:49.890083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.760 [2024-11-10 00:10:49.890102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.760 [2024-11-10 00:10:49.890120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.760 [2024-11-10 00:10:49.902659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.760 [2024-11-10 00:10:49.903154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.760 [2024-11-10 00:10:49.903191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.760 [2024-11-10 00:10:49.903214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.760 [2024-11-10 00:10:49.903483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.760 [2024-11-10 00:10:49.903720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.760 [2024-11-10 00:10:49.903747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.760 [2024-11-10 00:10:49.903766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.760 [2024-11-10 00:10:49.903785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.760 [2024-11-10 00:10:49.916280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.760 [2024-11-10 00:10:49.916744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.760 [2024-11-10 00:10:49.916782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.760 [2024-11-10 00:10:49.916806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.760 [2024-11-10 00:10:49.917100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.760 [2024-11-10 00:10:49.917330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.760 [2024-11-10 00:10:49.917356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.760 [2024-11-10 00:10:49.917376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.760 [2024-11-10 00:10:49.917395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.760 [2024-11-10 00:10:49.929983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.760 [2024-11-10 00:10:49.930384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.760 [2024-11-10 00:10:49.930422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.760 [2024-11-10 00:10:49.930445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.760 [2024-11-10 00:10:49.930749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.760 [2024-11-10 00:10:49.930997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.760 [2024-11-10 00:10:49.931032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.760 [2024-11-10 00:10:49.931053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.760 [2024-11-10 00:10:49.931073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:23.760 [2024-11-10 00:10:49.943880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:23.760 [2024-11-10 00:10:49.944323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:23.760 [2024-11-10 00:10:49.944376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:23.760 [2024-11-10 00:10:49.944400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:23.760 [2024-11-10 00:10:49.944701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:23.760 [2024-11-10 00:10:49.944980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:23.760 [2024-11-10 00:10:49.945024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:23.760 [2024-11-10 00:10:49.945044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:23.760 [2024-11-10 00:10:49.945063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:23.760 [2024-11-10 00:10:49.958335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.019 [2024-11-10 00:10:49.958813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.019 [2024-11-10 00:10:49.958851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.019 [2024-11-10 00:10:49.958875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.019 [2024-11-10 00:10:49.959165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.019 [2024-11-10 00:10:49.959418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.019 [2024-11-10 00:10:49.959445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.019 [2024-11-10 00:10:49.959465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.019 [2024-11-10 00:10:49.959483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.019 [2024-11-10 00:10:49.972053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.019 [2024-11-10 00:10:49.972496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.019 [2024-11-10 00:10:49.972533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.019 [2024-11-10 00:10:49.972557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.020 [2024-11-10 00:10:49.972860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.020 [2024-11-10 00:10:49.973104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.020 [2024-11-10 00:10:49.973131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.020 [2024-11-10 00:10:49.973150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.020 [2024-11-10 00:10:49.973173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.020 [2024-11-10 00:10:49.985748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.020 [2024-11-10 00:10:49.986217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.020 [2024-11-10 00:10:49.986253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.020 [2024-11-10 00:10:49.986275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.020 [2024-11-10 00:10:49.986548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.020 [2024-11-10 00:10:49.986805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.020 [2024-11-10 00:10:49.986833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.020 [2024-11-10 00:10:49.986854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.020 [2024-11-10 00:10:49.986873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.020 [2024-11-10 00:10:49.999496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.020 [2024-11-10 00:10:49.999992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.020 [2024-11-10 00:10:50.000031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.020 [2024-11-10 00:10:50.000055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.020 [2024-11-10 00:10:50.000324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.020 [2024-11-10 00:10:50.000616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.020 [2024-11-10 00:10:50.000645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.020 [2024-11-10 00:10:50.000678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.020 [2024-11-10 00:10:50.000698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.020 [2024-11-10 00:10:50.013519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.020 [2024-11-10 00:10:50.014088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.020 [2024-11-10 00:10:50.014129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.020 [2024-11-10 00:10:50.014154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.020 [2024-11-10 00:10:50.014455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.020 [2024-11-10 00:10:50.014738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.020 [2024-11-10 00:10:50.014768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.020 [2024-11-10 00:10:50.014790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.020 [2024-11-10 00:10:50.014810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.020 [2024-11-10 00:10:50.027729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.020 [2024-11-10 00:10:50.028153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.020 [2024-11-10 00:10:50.028191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.020 [2024-11-10 00:10:50.028215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.020 [2024-11-10 00:10:50.028523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.020 [2024-11-10 00:10:50.028828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.020 [2024-11-10 00:10:50.028874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.020 [2024-11-10 00:10:50.028896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.020 [2024-11-10 00:10:50.028916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.020 [2024-11-10 00:10:50.042106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.020 [2024-11-10 00:10:50.042597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.020 [2024-11-10 00:10:50.042637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.020 [2024-11-10 00:10:50.042662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.020 [2024-11-10 00:10:50.042953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.020 [2024-11-10 00:10:50.043208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.020 [2024-11-10 00:10:50.043236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.020 [2024-11-10 00:10:50.043256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.020 [2024-11-10 00:10:50.043275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.020 [2024-11-10 00:10:50.055905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.020 [2024-11-10 00:10:50.056281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.020 [2024-11-10 00:10:50.056319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.020 [2024-11-10 00:10:50.056341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.020 [2024-11-10 00:10:50.056624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.020 [2024-11-10 00:10:50.056879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.020 [2024-11-10 00:10:50.056906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.020 [2024-11-10 00:10:50.056926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.020 [2024-11-10 00:10:50.056960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.020 [2024-11-10 00:10:50.069912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.020 [2024-11-10 00:10:50.070333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.020 [2024-11-10 00:10:50.070370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.020 [2024-11-10 00:10:50.070400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.020 [2024-11-10 00:10:50.070677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.020 [2024-11-10 00:10:50.070951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.020 [2024-11-10 00:10:50.070978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.020 [2024-11-10 00:10:50.070997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.020 [2024-11-10 00:10:50.071016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.020 [2024-11-10 00:10:50.083834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.020 [2024-11-10 00:10:50.084332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.020 [2024-11-10 00:10:50.084368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.020 [2024-11-10 00:10:50.084391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.020 [2024-11-10 00:10:50.084665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.020 [2024-11-10 00:10:50.084934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.020 [2024-11-10 00:10:50.084961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.020 [2024-11-10 00:10:50.084979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.020 [2024-11-10 00:10:50.084998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.020 [2024-11-10 00:10:50.097712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.021 [2024-11-10 00:10:50.098187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.021 [2024-11-10 00:10:50.098224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.021 [2024-11-10 00:10:50.098247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.021 [2024-11-10 00:10:50.098538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.021 [2024-11-10 00:10:50.098817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.021 [2024-11-10 00:10:50.098846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.021 [2024-11-10 00:10:50.098865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.021 [2024-11-10 00:10:50.098899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.021 [2024-11-10 00:10:50.111534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.021 [2024-11-10 00:10:50.111945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.021 [2024-11-10 00:10:50.111981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.021 [2024-11-10 00:10:50.112005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.021 [2024-11-10 00:10:50.112281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.021 [2024-11-10 00:10:50.112512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.021 [2024-11-10 00:10:50.112538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.021 [2024-11-10 00:10:50.112558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.021 [2024-11-10 00:10:50.112576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.021 [2024-11-10 00:10:50.125417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.021 [2024-11-10 00:10:50.125890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.021 [2024-11-10 00:10:50.125944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.021 [2024-11-10 00:10:50.125968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.021 [2024-11-10 00:10:50.126243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.021 [2024-11-10 00:10:50.126490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.021 [2024-11-10 00:10:50.126516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.021 [2024-11-10 00:10:50.126535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.021 [2024-11-10 00:10:50.126552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.021 [2024-11-10 00:10:50.139452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.021 [2024-11-10 00:10:50.139897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.021 [2024-11-10 00:10:50.139934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.021 [2024-11-10 00:10:50.139958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.021 [2024-11-10 00:10:50.140255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.021 [2024-11-10 00:10:50.140502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.021 [2024-11-10 00:10:50.140530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.021 [2024-11-10 00:10:50.140549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.021 [2024-11-10 00:10:50.140582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.021 [2024-11-10 00:10:50.153319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.021 [2024-11-10 00:10:50.153784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.021 [2024-11-10 00:10:50.153822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.021 [2024-11-10 00:10:50.153846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.021 [2024-11-10 00:10:50.154135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.021 [2024-11-10 00:10:50.154362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.021 [2024-11-10 00:10:50.154388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.021 [2024-11-10 00:10:50.154412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.021 [2024-11-10 00:10:50.154431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.021 [2024-11-10 00:10:50.167305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.021 [2024-11-10 00:10:50.167748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.021 [2024-11-10 00:10:50.167787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.021 [2024-11-10 00:10:50.167810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.021 [2024-11-10 00:10:50.168087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.021 [2024-11-10 00:10:50.168315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.021 [2024-11-10 00:10:50.168341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.021 [2024-11-10 00:10:50.168360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.021 [2024-11-10 00:10:50.168394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.021 [2024-11-10 00:10:50.181254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.021 [2024-11-10 00:10:50.181719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.021 [2024-11-10 00:10:50.181758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.021 [2024-11-10 00:10:50.181782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.021 [2024-11-10 00:10:50.182062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.021 [2024-11-10 00:10:50.182314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.021 [2024-11-10 00:10:50.182340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.021 [2024-11-10 00:10:50.182361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.021 [2024-11-10 00:10:50.182380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.021 [2024-11-10 00:10:50.195457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.021 [2024-11-10 00:10:50.195920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.021 [2024-11-10 00:10:50.195959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.021 [2024-11-10 00:10:50.195984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.021 [2024-11-10 00:10:50.196277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.021 [2024-11-10 00:10:50.196533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.021 [2024-11-10 00:10:50.196561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.021 [2024-11-10 00:10:50.196608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.021 [2024-11-10 00:10:50.196630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.021 [2024-11-10 00:10:50.209295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.021 [2024-11-10 00:10:50.209753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.021 [2024-11-10 00:10:50.209790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.021 [2024-11-10 00:10:50.209814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.021 [2024-11-10 00:10:50.210101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.021 [2024-11-10 00:10:50.210328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.022 [2024-11-10 00:10:50.210355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.022 [2024-11-10 00:10:50.210389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.022 [2024-11-10 00:10:50.210408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.281 [2024-11-10 00:10:50.223387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.281 [2024-11-10 00:10:50.223830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.281 [2024-11-10 00:10:50.223869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.281 [2024-11-10 00:10:50.223893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.281 [2024-11-10 00:10:50.224148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.281 [2024-11-10 00:10:50.224438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.281 [2024-11-10 00:10:50.224465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.281 [2024-11-10 00:10:50.224484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.281 [2024-11-10 00:10:50.224502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.281 [2024-11-10 00:10:50.237215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.281 [2024-11-10 00:10:50.237662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.281 [2024-11-10 00:10:50.237701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.281 [2024-11-10 00:10:50.237726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.281 [2024-11-10 00:10:50.238021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.281 [2024-11-10 00:10:50.238258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.281 [2024-11-10 00:10:50.238285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.281 [2024-11-10 00:10:50.238304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.281 [2024-11-10 00:10:50.238323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.281 2546.80 IOPS, 9.95 MiB/s [2024-11-09T23:10:50.482Z] [2024-11-10 00:10:50.252805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.281 [2024-11-10 00:10:50.253227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.281 [2024-11-10 00:10:50.253272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.282 [2024-11-10 00:10:50.253296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.282 [2024-11-10 00:10:50.253573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.282 [2024-11-10 00:10:50.253877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.282 [2024-11-10 00:10:50.253905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.282 [2024-11-10 00:10:50.253940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.282 [2024-11-10 00:10:50.253959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.282 [2024-11-10 00:10:50.266679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.282 [2024-11-10 00:10:50.267084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.282 [2024-11-10 00:10:50.267121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.282 [2024-11-10 00:10:50.267144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.282 [2024-11-10 00:10:50.267418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.282 [2024-11-10 00:10:50.267689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.282 [2024-11-10 00:10:50.267719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.282 [2024-11-10 00:10:50.267738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.282 [2024-11-10 00:10:50.267758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.282 [2024-11-10 00:10:50.280543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.282 [2024-11-10 00:10:50.280983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.282 [2024-11-10 00:10:50.281036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.282 [2024-11-10 00:10:50.281059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.282 [2024-11-10 00:10:50.281337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.282 [2024-11-10 00:10:50.281564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.282 [2024-11-10 00:10:50.281616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.282 [2024-11-10 00:10:50.281636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.282 [2024-11-10 00:10:50.281655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.282 [2024-11-10 00:10:50.294516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.282 [2024-11-10 00:10:50.294965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.282 [2024-11-10 00:10:50.295003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.282 [2024-11-10 00:10:50.295027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.282 [2024-11-10 00:10:50.295338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.282 [2024-11-10 00:10:50.295599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.282 [2024-11-10 00:10:50.295627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.282 [2024-11-10 00:10:50.295662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.282 [2024-11-10 00:10:50.295681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.282 [2024-11-10 00:10:50.308308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.282 [2024-11-10 00:10:50.308756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.282 [2024-11-10 00:10:50.308795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.282 [2024-11-10 00:10:50.308819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.282 [2024-11-10 00:10:50.309105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.282 [2024-11-10 00:10:50.309332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.282 [2024-11-10 00:10:50.309359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.282 [2024-11-10 00:10:50.309378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.282 [2024-11-10 00:10:50.309397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3637767 Killed "${NVMF_APP[@]}" "$@" 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3638956 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3638956 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3638956 ']' 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:24.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:24.282 00:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:24.282 [2024-11-10 00:10:50.322430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.282 [2024-11-10 00:10:50.322827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.282 [2024-11-10 00:10:50.322865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.282 [2024-11-10 00:10:50.322894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.282 [2024-11-10 00:10:50.323178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.282 [2024-11-10 00:10:50.323412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.282 [2024-11-10 00:10:50.323438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.282 [2024-11-10 00:10:50.323458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.282 [2024-11-10 00:10:50.323476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.282 [2024-11-10 00:10:50.336429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.282 [2024-11-10 00:10:50.336890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.282 [2024-11-10 00:10:50.336927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.282 [2024-11-10 00:10:50.336952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.282 [2024-11-10 00:10:50.337246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.282 [2024-11-10 00:10:50.337472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.282 [2024-11-10 00:10:50.337498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.282 [2024-11-10 00:10:50.337516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.282 [2024-11-10 00:10:50.337533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.282 [2024-11-10 00:10:50.350282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.282 [2024-11-10 00:10:50.350766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.282 [2024-11-10 00:10:50.350811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.282 [2024-11-10 00:10:50.350835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.283 [2024-11-10 00:10:50.351132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.283 [2024-11-10 00:10:50.351358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.283 [2024-11-10 00:10:50.351383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.283 [2024-11-10 00:10:50.351402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.283 [2024-11-10 00:10:50.351420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.283 [2024-11-10 00:10:50.364132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.283 [2024-11-10 00:10:50.364605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.283 [2024-11-10 00:10:50.364652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.283 [2024-11-10 00:10:50.364675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.283 [2024-11-10 00:10:50.364980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.283 [2024-11-10 00:10:50.365207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.283 [2024-11-10 00:10:50.365232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.283 [2024-11-10 00:10:50.365250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.283 [2024-11-10 00:10:50.365267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.283 [2024-11-10 00:10:50.377893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.283 [2024-11-10 00:10:50.378336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.283 [2024-11-10 00:10:50.378383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.283 [2024-11-10 00:10:50.378406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.283 [2024-11-10 00:10:50.378702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.283 [2024-11-10 00:10:50.378974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.283 [2024-11-10 00:10:50.379000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.283 [2024-11-10 00:10:50.379019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.283 [2024-11-10 00:10:50.379035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.283 [2024-11-10 00:10:50.391765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.283 [2024-11-10 00:10:50.392279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.283 [2024-11-10 00:10:50.392326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.283 [2024-11-10 00:10:50.392349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.283 [2024-11-10 00:10:50.392653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.283 [2024-11-10 00:10:50.392904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.283 [2024-11-10 00:10:50.392946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.283 [2024-11-10 00:10:50.392965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.283 [2024-11-10 00:10:50.392997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.283 [2024-11-10 00:10:50.405520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.283 [2024-11-10 00:10:50.405925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.283 [2024-11-10 00:10:50.405961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.283 [2024-11-10 00:10:50.405984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.283 [2024-11-10 00:10:50.406254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.283 [2024-11-10 00:10:50.406486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.283 [2024-11-10 00:10:50.406515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.283 [2024-11-10 00:10:50.406535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.283 [2024-11-10 00:10:50.406553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.283 [2024-11-10 00:10:50.410468] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:37:24.283 [2024-11-10 00:10:50.410626] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.283 [2024-11-10 00:10:50.419441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.283 [2024-11-10 00:10:50.419939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.283 [2024-11-10 00:10:50.419986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.283 [2024-11-10 00:10:50.420011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.283 [2024-11-10 00:10:50.420301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.283 [2024-11-10 00:10:50.420578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.283 [2024-11-10 00:10:50.420638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.283 [2024-11-10 00:10:50.420660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.283 [2024-11-10 00:10:50.420679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.283 [2024-11-10 00:10:50.433354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.283 [2024-11-10 00:10:50.433786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.283 [2024-11-10 00:10:50.433834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.283 [2024-11-10 00:10:50.433858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.283 [2024-11-10 00:10:50.434142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.283 [2024-11-10 00:10:50.434401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.283 [2024-11-10 00:10:50.434428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.283 [2024-11-10 00:10:50.434446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.283 [2024-11-10 00:10:50.434465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.283 [2024-11-10 00:10:50.447285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.283 [2024-11-10 00:10:50.447805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.283 [2024-11-10 00:10:50.447854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.283 [2024-11-10 00:10:50.447878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.283 [2024-11-10 00:10:50.448172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.283 [2024-11-10 00:10:50.448405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.283 [2024-11-10 00:10:50.448430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.283 [2024-11-10 00:10:50.448449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.283 [2024-11-10 00:10:50.448467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.283 [2024-11-10 00:10:50.461154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.283 [2024-11-10 00:10:50.461580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.283 [2024-11-10 00:10:50.461624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.283 [2024-11-10 00:10:50.461657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.283 [2024-11-10 00:10:50.461963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.283 [2024-11-10 00:10:50.462190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.284 [2024-11-10 00:10:50.462215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.284 [2024-11-10 00:10:50.462234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.284 [2024-11-10 00:10:50.462262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.284 [2024-11-10 00:10:50.475118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.284 [2024-11-10 00:10:50.475543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.284 [2024-11-10 00:10:50.475600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.284 [2024-11-10 00:10:50.475626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.284 [2024-11-10 00:10:50.475907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.284 [2024-11-10 00:10:50.476184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.284 [2024-11-10 00:10:50.476211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.284 [2024-11-10 00:10:50.476230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.284 [2024-11-10 00:10:50.476248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.544 [2024-11-10 00:10:50.489510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.544 [2024-11-10 00:10:50.490014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.544 [2024-11-10 00:10:50.490058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.544 [2024-11-10 00:10:50.490081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.544 [2024-11-10 00:10:50.490380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.544 [2024-11-10 00:10:50.490670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.544 [2024-11-10 00:10:50.490697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.544 [2024-11-10 00:10:50.490722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.544 [2024-11-10 00:10:50.490741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.544 [2024-11-10 00:10:50.503350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.544 [2024-11-10 00:10:50.503773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.544 [2024-11-10 00:10:50.503810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.544 [2024-11-10 00:10:50.503841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.544 [2024-11-10 00:10:50.504156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.544 [2024-11-10 00:10:50.504412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.544 [2024-11-10 00:10:50.504453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.544 [2024-11-10 00:10:50.504473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.544 [2024-11-10 00:10:50.504491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.544 [2024-11-10 00:10:50.517609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.544 [2024-11-10 00:10:50.518080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.544 [2024-11-10 00:10:50.518128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.544 [2024-11-10 00:10:50.518152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.544 [2024-11-10 00:10:50.518439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.544 [2024-11-10 00:10:50.518706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.544 [2024-11-10 00:10:50.518734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.544 [2024-11-10 00:10:50.518755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.544 [2024-11-10 00:10:50.518775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.544 [2024-11-10 00:10:50.531681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.544 [2024-11-10 00:10:50.532097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.544 [2024-11-10 00:10:50.532145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.544 [2024-11-10 00:10:50.532169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.544 [2024-11-10 00:10:50.532449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.544 [2024-11-10 00:10:50.532715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.544 [2024-11-10 00:10:50.532743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.544 [2024-11-10 00:10:50.532763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.544 [2024-11-10 00:10:50.532782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.544 [2024-11-10 00:10:50.545931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.544 [2024-11-10 00:10:50.546412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.544 [2024-11-10 00:10:50.546459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.544 [2024-11-10 00:10:50.546483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.544 [2024-11-10 00:10:50.546783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.544 [2024-11-10 00:10:50.547051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.544 [2024-11-10 00:10:50.547079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.544 [2024-11-10 00:10:50.547100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.544 [2024-11-10 00:10:50.547121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.544 [2024-11-10 00:10:50.559932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.544 [2024-11-10 00:10:50.560414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.544 [2024-11-10 00:10:50.560459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.544 [2024-11-10 00:10:50.560483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.544 [2024-11-10 00:10:50.560760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.544 [2024-11-10 00:10:50.561051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.544 [2024-11-10 00:10:50.561078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.544 [2024-11-10 00:10:50.561096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.544 [2024-11-10 00:10:50.561114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.544 [2024-11-10 00:10:50.572274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:24.544 [2024-11-10 00:10:50.573831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.544 [2024-11-10 00:10:50.574289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.544 [2024-11-10 00:10:50.574334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.544 [2024-11-10 00:10:50.574358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.544 [2024-11-10 00:10:50.574664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.544 [2024-11-10 00:10:50.574908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.545 [2024-11-10 00:10:50.574949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.545 [2024-11-10 00:10:50.574969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.545 [2024-11-10 00:10:50.574987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.545 [2024-11-10 00:10:50.588441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.545 [2024-11-10 00:10:50.589078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.545 [2024-11-10 00:10:50.589138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.545 [2024-11-10 00:10:50.589164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.545 [2024-11-10 00:10:50.589481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.545 [2024-11-10 00:10:50.589785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.545 [2024-11-10 00:10:50.589814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.545 [2024-11-10 00:10:50.589842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.545 [2024-11-10 00:10:50.589863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.545 [2024-11-10 00:10:50.603068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.545 [2024-11-10 00:10:50.603625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.545 [2024-11-10 00:10:50.603684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.545 [2024-11-10 00:10:50.603710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.545 [2024-11-10 00:10:50.604040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.545 [2024-11-10 00:10:50.604336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.545 [2024-11-10 00:10:50.604370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.545 [2024-11-10 00:10:50.604397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.545 [2024-11-10 00:10:50.604422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.545 [2024-11-10 00:10:50.617557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.545 [2024-11-10 00:10:50.618053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.545 [2024-11-10 00:10:50.618109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.545 [2024-11-10 00:10:50.618136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.545 [2024-11-10 00:10:50.618426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.545 [2024-11-10 00:10:50.618735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.545 [2024-11-10 00:10:50.618765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.545 [2024-11-10 00:10:50.618787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.545 [2024-11-10 00:10:50.618823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.545 [2024-11-10 00:10:50.632072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.545 [2024-11-10 00:10:50.632532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.545 [2024-11-10 00:10:50.632575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.545 [2024-11-10 00:10:50.632629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.545 [2024-11-10 00:10:50.632926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.545 [2024-11-10 00:10:50.633230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.545 [2024-11-10 00:10:50.633262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.545 [2024-11-10 00:10:50.633286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.545 [2024-11-10 00:10:50.633308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.545 [2024-11-10 00:10:50.646785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.545 [2024-11-10 00:10:50.647226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.545 [2024-11-10 00:10:50.647269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.545 [2024-11-10 00:10:50.647296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.545 [2024-11-10 00:10:50.647583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.545 [2024-11-10 00:10:50.647869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.545 [2024-11-10 00:10:50.647900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.545 [2024-11-10 00:10:50.647921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.545 [2024-11-10 00:10:50.647957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.545 [2024-11-10 00:10:50.661247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.545 [2024-11-10 00:10:50.661804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.545 [2024-11-10 00:10:50.661842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.545 [2024-11-10 00:10:50.661866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.545 [2024-11-10 00:10:50.662171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.545 [2024-11-10 00:10:50.662459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.545 [2024-11-10 00:10:50.662493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.545 [2024-11-10 00:10:50.662517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.545 [2024-11-10 00:10:50.662540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.545 [2024-11-10 00:10:50.675847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.545 [2024-11-10 00:10:50.676332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.545 [2024-11-10 00:10:50.676375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.545 [2024-11-10 00:10:50.676403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.545 [2024-11-10 00:10:50.676729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.545 [2024-11-10 00:10:50.677019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.545 [2024-11-10 00:10:50.677053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.545 [2024-11-10 00:10:50.677076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.545 [2024-11-10 00:10:50.677098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.545 [2024-11-10 00:10:50.690459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.545 [2024-11-10 00:10:50.691049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.545 [2024-11-10 00:10:50.691098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.545 [2024-11-10 00:10:50.691125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.545 [2024-11-10 00:10:50.691431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.545 [2024-11-10 00:10:50.691765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.545 [2024-11-10 00:10:50.691798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.545 [2024-11-10 00:10:50.691819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.545 [2024-11-10 00:10:50.691842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.545 [2024-11-10 00:10:50.704972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.545 [2024-11-10 00:10:50.705461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.545 [2024-11-10 00:10:50.705503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.546 [2024-11-10 00:10:50.705530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.546 [2024-11-10 00:10:50.705857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.546 [2024-11-10 00:10:50.706149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.546 [2024-11-10 00:10:50.706196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.546 [2024-11-10 00:10:50.706219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.546 [2024-11-10 00:10:50.706242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.546 [2024-11-10 00:10:50.712772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:24.546 [2024-11-10 00:10:50.712819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:24.546 [2024-11-10 00:10:50.712840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:24.546 [2024-11-10 00:10:50.712861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:24.546 [2024-11-10 00:10:50.712878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
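(The app_setup_trace notices above give two ways to grab the tracepoint data for this run. A minimal sketch using exactly the commands the notices name, assuming the spdk_trace tool from the SPDK build is on PATH on the test node:
spdk_trace -s nvmf -i 0        # snapshot the running nvmf app's trace events at runtime
cp /dev/shm/nvmf_trace.0 .     # or keep the shared-memory trace file for offline analysis/debug)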
00:37:24.546 [2024-11-10 00:10:50.715544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:24.546 [2024-11-10 00:10:50.715639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.546 [2024-11-10 00:10:50.715655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:24.546 [2024-11-10 00:10:50.719275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.546 [2024-11-10 00:10:50.719780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.546 [2024-11-10 00:10:50.719823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.546 [2024-11-10 00:10:50.719849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.546 [2024-11-10 00:10:50.720146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.546 [2024-11-10 00:10:50.720402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.546 [2024-11-10 00:10:50.720431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.546 [2024-11-10 00:10:50.720453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.546 [2024-11-10 00:10:50.720473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.546 [2024-11-10 00:10:50.733619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.546 [2024-11-10 00:10:50.734272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.546 [2024-11-10 00:10:50.734320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.546 [2024-11-10 00:10:50.734351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.546 [2024-11-10 00:10:50.734664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.546 [2024-11-10 00:10:50.734967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.546 [2024-11-10 00:10:50.734996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.546 [2024-11-10 00:10:50.735037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.546 [2024-11-10 00:10:50.735064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
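(The three reactors started here match the EAL core mask passed earlier, -c 0xE: binary 1110 selects cores 1, 2 and 3, consistent with the "Total cores available: 3" notice above. A quick, illustrative check of the mask:
python3 -c 'print([bit for bit in range(8) if 0xE >> bit & 1])'
# prints: [1, 2, 3])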
00:37:24.804 [2024-11-10 00:10:50.747888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.804 [2024-11-10 00:10:50.748368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.804 [2024-11-10 00:10:50.748406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.805 [2024-11-10 00:10:50.748429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.805 [2024-11-10 00:10:50.748699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.805 [2024-11-10 00:10:50.748961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.805 [2024-11-10 00:10:50.748990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.805 [2024-11-10 00:10:50.749011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.805 [2024-11-10 00:10:50.749032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.805 [2024-11-10 00:10:50.762110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.805 [2024-11-10 00:10:50.762536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.805 [2024-11-10 00:10:50.762573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.805 [2024-11-10 00:10:50.762611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.805 [2024-11-10 00:10:50.762903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.805 [2024-11-10 00:10:50.763150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.805 [2024-11-10 00:10:50.763177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.805 [2024-11-10 00:10:50.763197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.805 [2024-11-10 00:10:50.763216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.805 [2024-11-10 00:10:50.776269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.805 [2024-11-10 00:10:50.776708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.805 [2024-11-10 00:10:50.776746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.805 [2024-11-10 00:10:50.776769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.805 [2024-11-10 00:10:50.777058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.805 [2024-11-10 00:10:50.777304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.805 [2024-11-10 00:10:50.777331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.805 [2024-11-10 00:10:50.777351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.805 [2024-11-10 00:10:50.777370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.805 [2024-11-10 00:10:50.790443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.805 [2024-11-10 00:10:50.790952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.805 [2024-11-10 00:10:50.790994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.805 [2024-11-10 00:10:50.791020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.805 [2024-11-10 00:10:50.791319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.805 [2024-11-10 00:10:50.791624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.805 [2024-11-10 00:10:50.791654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.805 [2024-11-10 00:10:50.791678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.805 [2024-11-10 00:10:50.791700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.805 [2024-11-10 00:10:50.804745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.805 [2024-11-10 00:10:50.805391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.805 [2024-11-10 00:10:50.805442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.805 [2024-11-10 00:10:50.805472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.805 [2024-11-10 00:10:50.805767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.805 [2024-11-10 00:10:50.806050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.805 [2024-11-10 00:10:50.806080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.805 [2024-11-10 00:10:50.806104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.805 [2024-11-10 00:10:50.806128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.805 [2024-11-10 00:10:50.818918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.805 [2024-11-10 00:10:50.819510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.805 [2024-11-10 00:10:50.819560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.805 [2024-11-10 00:10:50.819608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.805 [2024-11-10 00:10:50.819915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.805 [2024-11-10 00:10:50.820206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.805 [2024-11-10 00:10:50.820234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.805 [2024-11-10 00:10:50.820259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.805 [2024-11-10 00:10:50.820282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.805 [2024-11-10 00:10:50.833219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.805 [2024-11-10 00:10:50.833663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.805 [2024-11-10 00:10:50.833701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.805 [2024-11-10 00:10:50.833725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.805 [2024-11-10 00:10:50.834017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.805 [2024-11-10 00:10:50.834283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.805 [2024-11-10 00:10:50.834311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.805 [2024-11-10 00:10:50.834331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.805 [2024-11-10 00:10:50.834351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.805 [2024-11-10 00:10:50.847609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.805 [2024-11-10 00:10:50.848022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.805 [2024-11-10 00:10:50.848059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.805 [2024-11-10 00:10:50.848081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.805 [2024-11-10 00:10:50.848351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.805 [2024-11-10 00:10:50.848654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.805 [2024-11-10 00:10:50.848683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.805 [2024-11-10 00:10:50.848710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.805 [2024-11-10 00:10:50.848731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.805 [2024-11-10 00:10:50.861610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.805 [2024-11-10 00:10:50.862050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.805 [2024-11-10 00:10:50.862087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.805 [2024-11-10 00:10:50.862111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.805 [2024-11-10 00:10:50.862397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.805 [2024-11-10 00:10:50.862680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.805 [2024-11-10 00:10:50.862709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.805 [2024-11-10 00:10:50.862731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.805 [2024-11-10 00:10:50.862750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.805 [2024-11-10 00:10:50.875655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.805 [2024-11-10 00:10:50.876110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.805 [2024-11-10 00:10:50.876147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.805 [2024-11-10 00:10:50.876170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.805 [2024-11-10 00:10:50.876454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.805 [2024-11-10 00:10:50.876729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.805 [2024-11-10 00:10:50.876758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.805 [2024-11-10 00:10:50.876779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.805 [2024-11-10 00:10:50.876798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.805 [2024-11-10 00:10:50.889732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.805 [2024-11-10 00:10:50.890178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.805 [2024-11-10 00:10:50.890215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.805 [2024-11-10 00:10:50.890238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.805 [2024-11-10 00:10:50.890522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.806 [2024-11-10 00:10:50.890799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.806 [2024-11-10 00:10:50.890828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.806 [2024-11-10 00:10:50.890848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.806 [2024-11-10 00:10:50.890867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.806 [2024-11-10 00:10:50.903686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.806 [2024-11-10 00:10:50.904130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.806 [2024-11-10 00:10:50.904168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.806 [2024-11-10 00:10:50.904192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.806 [2024-11-10 00:10:50.904472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.806 [2024-11-10 00:10:50.904769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.806 [2024-11-10 00:10:50.904800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.806 [2024-11-10 00:10:50.904821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.806 [2024-11-10 00:10:50.904842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.806 [2024-11-10 00:10:50.917732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.806 [2024-11-10 00:10:50.918150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.806 [2024-11-10 00:10:50.918188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.806 [2024-11-10 00:10:50.918211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.806 [2024-11-10 00:10:50.918480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.806 [2024-11-10 00:10:50.918763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.806 [2024-11-10 00:10:50.918792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.806 [2024-11-10 00:10:50.918812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.806 [2024-11-10 00:10:50.918831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.806 [2024-11-10 00:10:50.931731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.806 [2024-11-10 00:10:50.932208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.806 [2024-11-10 00:10:50.932245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.806 [2024-11-10 00:10:50.932269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.806 [2024-11-10 00:10:50.932553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.806 [2024-11-10 00:10:50.932826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.806 [2024-11-10 00:10:50.932854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.806 [2024-11-10 00:10:50.932901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.806 [2024-11-10 00:10:50.932921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.806 [2024-11-10 00:10:50.945923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.806 [2024-11-10 00:10:50.946595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.806 [2024-11-10 00:10:50.946665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.806 [2024-11-10 00:10:50.946697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.806 [2024-11-10 00:10:50.947005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.806 [2024-11-10 00:10:50.947266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.806 [2024-11-10 00:10:50.947296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.806 [2024-11-10 00:10:50.947324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.806 [2024-11-10 00:10:50.947349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.806 [2024-11-10 00:10:50.960335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.806 [2024-11-10 00:10:50.960993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.806 [2024-11-10 00:10:50.961044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.806 [2024-11-10 00:10:50.961077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.806 [2024-11-10 00:10:50.961390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.806 [2024-11-10 00:10:50.961673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.806 [2024-11-10 00:10:50.961704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.806 [2024-11-10 00:10:50.961731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.806 [2024-11-10 00:10:50.961756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.806 [2024-11-10 00:10:50.974608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.806 [2024-11-10 00:10:50.975128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.806 [2024-11-10 00:10:50.975166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.806 [2024-11-10 00:10:50.975190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.806 [2024-11-10 00:10:50.975482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.806 [2024-11-10 00:10:50.975770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.806 [2024-11-10 00:10:50.975801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.806 [2024-11-10 00:10:50.975822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.806 [2024-11-10 00:10:50.975842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:24.806 [2024-11-10 00:10:50.988667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.806 [2024-11-10 00:10:50.989138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.806 [2024-11-10 00:10:50.989176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.806 [2024-11-10 00:10:50.989200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.806 [2024-11-10 00:10:50.989494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.806 [2024-11-10 00:10:50.989791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.806 [2024-11-10 00:10:50.989821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.806 [2024-11-10 00:10:50.989843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.806 [2024-11-10 00:10:50.989863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:24.806 [2024-11-10 00:10:51.002725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:24.806 [2024-11-10 00:10:51.003188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.806 [2024-11-10 00:10:51.003226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:24.806 [2024-11-10 00:10:51.003251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:24.806 [2024-11-10 00:10:51.003542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:24.806 [2024-11-10 00:10:51.003831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:24.807 [2024-11-10 00:10:51.003862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:24.807 [2024-11-10 00:10:51.003898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:24.807 [2024-11-10 00:10:51.003919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.066 [2024-11-10 00:10:51.016897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.066 [2024-11-10 00:10:51.017395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.066 [2024-11-10 00:10:51.017434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.066 [2024-11-10 00:10:51.017458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.066 [2024-11-10 00:10:51.017728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.066 [2024-11-10 00:10:51.018014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.066 [2024-11-10 00:10:51.018072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.066 [2024-11-10 00:10:51.018092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.066 [2024-11-10 00:10:51.018112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.066 [2024-11-10 00:10:51.030974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.066 [2024-11-10 00:10:51.031391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.066 [2024-11-10 00:10:51.031428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.066 [2024-11-10 00:10:51.031452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.066 [2024-11-10 00:10:51.031737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.066 [2024-11-10 00:10:51.032007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.066 [2024-11-10 00:10:51.032036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.066 [2024-11-10 00:10:51.032055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.066 [2024-11-10 00:10:51.032075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.066 [2024-11-10 00:10:51.045095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.066 [2024-11-10 00:10:51.045577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.066 [2024-11-10 00:10:51.045624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.066 [2024-11-10 00:10:51.045649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.066 [2024-11-10 00:10:51.045937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.066 [2024-11-10 00:10:51.046184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.066 [2024-11-10 00:10:51.046212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.066 [2024-11-10 00:10:51.046232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.066 [2024-11-10 00:10:51.046252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.066 [2024-11-10 00:10:51.059236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.066 [2024-11-10 00:10:51.059754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.066 [2024-11-10 00:10:51.059795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.066 [2024-11-10 00:10:51.059820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.066 [2024-11-10 00:10:51.060113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.066 [2024-11-10 00:10:51.060364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.066 [2024-11-10 00:10:51.060392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.066 [2024-11-10 00:10:51.060414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.066 [2024-11-10 00:10:51.060435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.066 [2024-11-10 00:10:51.073292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.066 [2024-11-10 00:10:51.073741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.066 [2024-11-10 00:10:51.073779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.066 [2024-11-10 00:10:51.073803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.066 [2024-11-10 00:10:51.074102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.066 [2024-11-10 00:10:51.074351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.066 [2024-11-10 00:10:51.074380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.066 [2024-11-10 00:10:51.074405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.066 [2024-11-10 00:10:51.074426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.066 [2024-11-10 00:10:51.087309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.066 [2024-11-10 00:10:51.087734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.066 [2024-11-10 00:10:51.087771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.066 [2024-11-10 00:10:51.087795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.066 [2024-11-10 00:10:51.088081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.066 [2024-11-10 00:10:51.088324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.066 [2024-11-10 00:10:51.088352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.067 [2024-11-10 00:10:51.088372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.067 [2024-11-10 00:10:51.088392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.067 [2024-11-10 00:10:51.101268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.067 [2024-11-10 00:10:51.101693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.067 [2024-11-10 00:10:51.101731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.067 [2024-11-10 00:10:51.101754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.067 [2024-11-10 00:10:51.102040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.067 [2024-11-10 00:10:51.102283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.067 [2024-11-10 00:10:51.102310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.067 [2024-11-10 00:10:51.102330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.067 [2024-11-10 00:10:51.102350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.067 [2024-11-10 00:10:51.115352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.067 [2024-11-10 00:10:51.115746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.067 [2024-11-10 00:10:51.115784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.067 [2024-11-10 00:10:51.115807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.067 [2024-11-10 00:10:51.116075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.067 [2024-11-10 00:10:51.116316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.067 [2024-11-10 00:10:51.116344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.067 [2024-11-10 00:10:51.116364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.067 [2024-11-10 00:10:51.116383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.067 [2024-11-10 00:10:51.129365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.067 [2024-11-10 00:10:51.129811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.067 [2024-11-10 00:10:51.129849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.067 [2024-11-10 00:10:51.129872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.067 [2024-11-10 00:10:51.130155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.067 [2024-11-10 00:10:51.130397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.067 [2024-11-10 00:10:51.130424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.067 [2024-11-10 00:10:51.130444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.067 [2024-11-10 00:10:51.130463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.067 [2024-11-10 00:10:51.143435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.067 [2024-11-10 00:10:51.143882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.067 [2024-11-10 00:10:51.143920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.067 [2024-11-10 00:10:51.143944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.067 [2024-11-10 00:10:51.144228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.067 [2024-11-10 00:10:51.144488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.067 [2024-11-10 00:10:51.144517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.067 [2024-11-10 00:10:51.144537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.067 [2024-11-10 00:10:51.144557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.067 [2024-11-10 00:10:51.157400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.067 [2024-11-10 00:10:51.157776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.067 [2024-11-10 00:10:51.157814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.067 [2024-11-10 00:10:51.157838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.067 [2024-11-10 00:10:51.158106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.067 [2024-11-10 00:10:51.158349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.067 [2024-11-10 00:10:51.158376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.067 [2024-11-10 00:10:51.158396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.067 [2024-11-10 00:10:51.158416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.067 [2024-11-10 00:10:51.171398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.067 [2024-11-10 00:10:51.171848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.067 [2024-11-10 00:10:51.171890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.067 [2024-11-10 00:10:51.171915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.067 [2024-11-10 00:10:51.172198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.067 [2024-11-10 00:10:51.172440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.067 [2024-11-10 00:10:51.172466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.067 [2024-11-10 00:10:51.172485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.067 [2024-11-10 00:10:51.172504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.067 [2024-11-10 00:10:51.185353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.067 [2024-11-10 00:10:51.185763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.067 [2024-11-10 00:10:51.185801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.067 [2024-11-10 00:10:51.185824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.067 [2024-11-10 00:10:51.186107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.067 [2024-11-10 00:10:51.186348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.067 [2024-11-10 00:10:51.186376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.067 [2024-11-10 00:10:51.186395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.067 [2024-11-10 00:10:51.186413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.067 [2024-11-10 00:10:51.199417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.067 [2024-11-10 00:10:51.199826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.067 [2024-11-10 00:10:51.199864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.067 [2024-11-10 00:10:51.199887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.067 [2024-11-10 00:10:51.200170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.067 [2024-11-10 00:10:51.200421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.067 [2024-11-10 00:10:51.200449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.067 [2024-11-10 00:10:51.200469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.067 [2024-11-10 00:10:51.200488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.067 [2024-11-10 00:10:51.213411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.067 [2024-11-10 00:10:51.213838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.067 [2024-11-10 00:10:51.213875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.067 [2024-11-10 00:10:51.213899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.068 [2024-11-10 00:10:51.214186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.068 [2024-11-10 00:10:51.214428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.068 [2024-11-10 00:10:51.214455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.068 [2024-11-10 00:10:51.214474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.068 [2024-11-10 00:10:51.214508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.068 [2024-11-10 00:10:51.227550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.068 [2024-11-10 00:10:51.227990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.068 [2024-11-10 00:10:51.228028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.068 [2024-11-10 00:10:51.228051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.068 [2024-11-10 00:10:51.228336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.068 [2024-11-10 00:10:51.228580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.068 [2024-11-10 00:10:51.228630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.068 [2024-11-10 00:10:51.228650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.068 [2024-11-10 00:10:51.228669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.068 [2024-11-10 00:10:51.241519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.068 [2024-11-10 00:10:51.241918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.068 [2024-11-10 00:10:51.241956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.068 [2024-11-10 00:10:51.241980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.068 [2024-11-10 00:10:51.242249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.068 [2024-11-10 00:10:51.242491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.068 [2024-11-10 00:10:51.242519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.068 [2024-11-10 00:10:51.242538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.068 [2024-11-10 00:10:51.242557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.068 2122.33 IOPS, 8.29 MiB/s [2024-11-09T23:10:51.269Z] [2024-11-10 00:10:51.257217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.068 [2024-11-10 00:10:51.257646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.068 [2024-11-10 00:10:51.257684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.068 [2024-11-10 00:10:51.257708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.068 [2024-11-10 00:10:51.257993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.068 [2024-11-10 00:10:51.258234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.068 [2024-11-10 00:10:51.258267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.068 [2024-11-10 00:10:51.258288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.068 [2024-11-10 00:10:51.258307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.328 [2024-11-10 00:10:51.271258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.328 [2024-11-10 00:10:51.271653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.328 [2024-11-10 00:10:51.271692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.328 [2024-11-10 00:10:51.271716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.328 [2024-11-10 00:10:51.272004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.328 [2024-11-10 00:10:51.272246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.328 [2024-11-10 00:10:51.272273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.328 [2024-11-10 00:10:51.272292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.328 [2024-11-10 00:10:51.272311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.328 [2024-11-10 00:10:51.285144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.328 [2024-11-10 00:10:51.285595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.328 [2024-11-10 00:10:51.285632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.328 [2024-11-10 00:10:51.285655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.328 [2024-11-10 00:10:51.285926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.328 [2024-11-10 00:10:51.286181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.328 [2024-11-10 00:10:51.286208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.328 [2024-11-10 00:10:51.286228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.328 [2024-11-10 00:10:51.286247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.328 [2024-11-10 00:10:51.299185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.328 [2024-11-10 00:10:51.299656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.328 [2024-11-10 00:10:51.299695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.328 [2024-11-10 00:10:51.299718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.328 [2024-11-10 00:10:51.300002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.328 [2024-11-10 00:10:51.300243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.328 [2024-11-10 00:10:51.300270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.328 [2024-11-10 00:10:51.300294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.328 [2024-11-10 00:10:51.300314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.328 [2024-11-10 00:10:51.313100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.328 [2024-11-10 00:10:51.313475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.328 [2024-11-10 00:10:51.313514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.328 [2024-11-10 00:10:51.313537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.329 [2024-11-10 00:10:51.313815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.329 [2024-11-10 00:10:51.314075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.329 [2024-11-10 00:10:51.314102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.329 [2024-11-10 00:10:51.314122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.329 [2024-11-10 00:10:51.314141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.329 [2024-11-10 00:10:51.327092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.329 [2024-11-10 00:10:51.327469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.329 [2024-11-10 00:10:51.327506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.329 [2024-11-10 00:10:51.327529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.329 [2024-11-10 00:10:51.327807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.329 [2024-11-10 00:10:51.328066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.329 [2024-11-10 00:10:51.328093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.329 [2024-11-10 00:10:51.328114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.329 [2024-11-10 00:10:51.328133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.329 [2024-11-10 00:10:51.341004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.329 [2024-11-10 00:10:51.341432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.329 [2024-11-10 00:10:51.341469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.329 [2024-11-10 00:10:51.341493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.329 [2024-11-10 00:10:51.341771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.329 [2024-11-10 00:10:51.342030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.329 [2024-11-10 00:10:51.342058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.329 [2024-11-10 00:10:51.342078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.329 [2024-11-10 00:10:51.342097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.329 [2024-11-10 00:10:51.354966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.329 [2024-11-10 00:10:51.355427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.329 [2024-11-10 00:10:51.355466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.329 [2024-11-10 00:10:51.355490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.329 [2024-11-10 00:10:51.355769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.329 [2024-11-10 00:10:51.356029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.329 [2024-11-10 00:10:51.356057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.329 [2024-11-10 00:10:51.356075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.329 [2024-11-10 00:10:51.356094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.329 [2024-11-10 00:10:51.368886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.329 [2024-11-10 00:10:51.369383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.329 [2024-11-10 00:10:51.369421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.329 [2024-11-10 00:10:51.369444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.329 [2024-11-10 00:10:51.369722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.329 [2024-11-10 00:10:51.369984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.329 [2024-11-10 00:10:51.370011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.329 [2024-11-10 00:10:51.370031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.329 [2024-11-10 00:10:51.370050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.329 [2024-11-10 00:10:51.382817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.329 [2024-11-10 00:10:51.383252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.329 [2024-11-10 00:10:51.383289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.329 [2024-11-10 00:10:51.383312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.329 [2024-11-10 00:10:51.383602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.329 [2024-11-10 00:10:51.383867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.329 [2024-11-10 00:10:51.383895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.329 [2024-11-10 00:10:51.383916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.329 [2024-11-10 00:10:51.383950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.329 [2024-11-10 00:10:51.396886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.329 [2024-11-10 00:10:51.397275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.329 [2024-11-10 00:10:51.397312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.329 [2024-11-10 00:10:51.397341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.329 [2024-11-10 00:10:51.397640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.329 [2024-11-10 00:10:51.397897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.329 [2024-11-10 00:10:51.397940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.329 [2024-11-10 00:10:51.397961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.329 [2024-11-10 00:10:51.397980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.329 [2024-11-10 00:10:51.410984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.329 [2024-11-10 00:10:51.411383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.329 [2024-11-10 00:10:51.411422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.329 [2024-11-10 00:10:51.411447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.329 [2024-11-10 00:10:51.411715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.329 [2024-11-10 00:10:51.411990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.329 [2024-11-10 00:10:51.412019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.329 [2024-11-10 00:10:51.412040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.329 [2024-11-10 00:10:51.412061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.329 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:25.329 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:37:25.329 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:25.329 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:25.329 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.329 [2024-11-10 00:10:51.425248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.329 [2024-11-10 00:10:51.425671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.329 [2024-11-10 00:10:51.425708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.329 [2024-11-10 00:10:51.425732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.330 [2024-11-10 00:10:51.426017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.330 [2024-11-10 00:10:51.426260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.330 [2024-11-10 00:10:51.426288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.330 [2024-11-10 00:10:51.426308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.330 [2024-11-10 00:10:51.426327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.330 [2024-11-10 00:10:51.439284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.330 [2024-11-10 00:10:51.439692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.330 [2024-11-10 00:10:51.439730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.330 [2024-11-10 00:10:51.439754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.330 [2024-11-10 00:10:51.440037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.330 [2024-11-10 00:10:51.440281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.330 [2024-11-10 00:10:51.440309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.330 [2024-11-10 00:10:51.440330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.330 [2024-11-10 00:10:51.440348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.330 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:25.330 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:25.330 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.330 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.330 [2024-11-10 00:10:51.448216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:25.330 [2024-11-10 00:10:51.453290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.330 [2024-11-10 00:10:51.453742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.330 [2024-11-10 00:10:51.453780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.330 [2024-11-10 00:10:51.453805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.330 [2024-11-10 00:10:51.454095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.330 [2024-11-10 00:10:51.454363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.330 [2024-11-10 00:10:51.454391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.330 [2024-11-10 00:10:51.454412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.330 [2024-11-10 00:10:51.454433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.330 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.330 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:25.330 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.330 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.330 [2024-11-10 00:10:51.467418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.330 [2024-11-10 00:10:51.467875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.330 [2024-11-10 00:10:51.467914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.330 [2024-11-10 00:10:51.467939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.330 [2024-11-10 00:10:51.468228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.330 [2024-11-10 00:10:51.468475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.330 [2024-11-10 00:10:51.468502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.330 [2024-11-10 00:10:51.468523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.330 [2024-11-10 00:10:51.468541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.330 [2024-11-10 00:10:51.481622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.330 [2024-11-10 00:10:51.482125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.330 [2024-11-10 00:10:51.482167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.330 [2024-11-10 00:10:51.482192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.330 [2024-11-10 00:10:51.482485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.330 [2024-11-10 00:10:51.482775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.330 [2024-11-10 00:10:51.482804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.330 [2024-11-10 00:10:51.482827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.330 [2024-11-10 00:10:51.482850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.330 [2024-11-10 00:10:51.495767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.330 [2024-11-10 00:10:51.496420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.330 [2024-11-10 00:10:51.496469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.330 [2024-11-10 00:10:51.496501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.330 [2024-11-10 00:10:51.496785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.330 [2024-11-10 00:10:51.497077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.330 [2024-11-10 00:10:51.497107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.330 [2024-11-10 00:10:51.497133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.330 [2024-11-10 00:10:51.497156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.330 [2024-11-10 00:10:51.509922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.330 [2024-11-10 00:10:51.510351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.330 [2024-11-10 00:10:51.510390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.330 [2024-11-10 00:10:51.510414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.330 [2024-11-10 00:10:51.510737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.330 [2024-11-10 00:10:51.511005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.330 [2024-11-10 00:10:51.511033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.330 [2024-11-10 00:10:51.511060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.330 [2024-11-10 00:10:51.511080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.330 [2024-11-10 00:10:51.524108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.330 [2024-11-10 00:10:51.524507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.330 [2024-11-10 00:10:51.524546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.330 [2024-11-10 00:10:51.524570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.330 [2024-11-10 00:10:51.524859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.330 [2024-11-10 00:10:51.525125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.330 [2024-11-10 00:10:51.525154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.330 [2024-11-10 00:10:51.525174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.330 [2024-11-10 00:10:51.525193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.589 [2024-11-10 00:10:51.538231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.589 [2024-11-10 00:10:51.538596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.589 [2024-11-10 00:10:51.538649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.589 [2024-11-10 00:10:51.538674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.589 [2024-11-10 00:10:51.538961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.589 [2024-11-10 00:10:51.539206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.589 [2024-11-10 00:10:51.539234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.589 [2024-11-10 00:10:51.539255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.589 [2024-11-10 00:10:51.539275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.589 Malloc0 00:37:25.589 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.589 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:25.589 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.589 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.589 [2024-11-10 00:10:51.552422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.589 [2024-11-10 00:10:51.552908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.589 [2024-11-10 00:10:51.552948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.589 [2024-11-10 00:10:51.552973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.589 [2024-11-10 00:10:51.553262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.589 [2024-11-10 00:10:51.553504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.589 [2024-11-10 00:10:51.553537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.589 [2024-11-10 00:10:51.553557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.589 [2024-11-10 00:10:51.553600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:25.589 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.589 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:25.589 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.589 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.589 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.589 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:25.589 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.589 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.589 [2024-11-10 00:10:51.566454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.589 [2024-11-10 00:10:51.566884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:25.590 [2024-11-10 00:10:51.566922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:25.590 [2024-11-10 00:10:51.566947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:25.590 [2024-11-10 00:10:51.567191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:25.590 [2024-11-10 00:10:51.567220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:25.590 [2024-11-10 00:10:51.567469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:25.590 [2024-11-10 00:10:51.567513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:25.590 [2024-11-10 00:10:51.567542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:25.590 [2024-11-10 00:10:51.567562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:25.590 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.590 00:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3638291 00:37:25.590 [2024-11-10 00:10:51.580533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:25.590 [2024-11-10 00:10:51.659151] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:37:27.090 2339.14 IOPS, 9.14 MiB/s [2024-11-09T23:10:54.666Z] 2815.75 IOPS, 11.00 MiB/s [2024-11-09T23:10:55.601Z] 3201.44 IOPS, 12.51 MiB/s [2024-11-09T23:10:56.537Z] 3488.40 IOPS, 13.63 MiB/s [2024-11-09T23:10:57.470Z] 3743.82 IOPS, 14.62 MiB/s [2024-11-09T23:10:58.404Z] 3947.67 IOPS, 15.42 MiB/s [2024-11-09T23:10:59.353Z] 4119.46 IOPS, 16.09 MiB/s [2024-11-09T23:11:00.287Z] 4267.64 IOPS, 16.67 MiB/s [2024-11-09T23:11:00.287Z] 4392.33 IOPS, 17.16 MiB/s 00:37:34.086 Latency(us) 00:37:34.086 [2024-11-09T23:11:00.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.086 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:34.086 Verification LBA range: start 0x0 length 0x4000 00:37:34.086 Nvme1n1 : 15.01 4390.89 17.15 9710.96 0.00 9048.36 1104.40 40195.41 00:37:34.086 [2024-11-09T23:11:00.287Z] =================================================================================================================== 00:37:34.086 [2024-11-09T23:11:00.287Z] Total : 4390.89 17.15 9710.96 0.00 9048.36 1104.40 40195.41 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:35.018 rmmod nvme_tcp 00:37:35.018 rmmod nvme_fabrics 00:37:35.018 rmmod nvme_keyring 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3638956 ']' 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3638956 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 3638956 ']' 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 3638956 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3638956 
00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3638956' 00:37:35.018 killing process with pid 3638956 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 3638956 00:37:35.018 00:11:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 3638956 00:37:36.400 00:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:36.400 00:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:36.400 00:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:36.400 00:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:36.400 00:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:37:36.400 00:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:36.400 00:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:37:36.400 00:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:36.400 00:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:36.400 00:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.400 00:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:36.400 00:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.300 00:11:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:38.560 00:37:38.560 real 0m26.417s 00:37:38.560 user 1m11.932s 00:37:38.560 sys 0m4.970s 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:38.560 ************************************ 00:37:38.560 END TEST nvmf_bdevperf 00:37:38.560 ************************************ 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.560 ************************************ 00:37:38.560 START TEST nvmf_target_disconnect 00:37:38.560 ************************************ 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:38.560 * Looking for test storage... 
00:37:38.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:38.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.560 --rc genhtml_branch_coverage=1 00:37:38.560 --rc genhtml_function_coverage=1 00:37:38.560 --rc genhtml_legend=1 00:37:38.560 --rc geninfo_all_blocks=1 00:37:38.560 --rc geninfo_unexecuted_blocks=1 00:37:38.560 00:37:38.560 ' 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:38.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.560 --rc genhtml_branch_coverage=1 00:37:38.560 --rc genhtml_function_coverage=1 00:37:38.560 --rc genhtml_legend=1 00:37:38.560 --rc geninfo_all_blocks=1 00:37:38.560 --rc geninfo_unexecuted_blocks=1 00:37:38.560 00:37:38.560 ' 00:37:38.560 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:38.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.560 --rc genhtml_branch_coverage=1 00:37:38.560 --rc genhtml_function_coverage=1 00:37:38.560 --rc genhtml_legend=1 00:37:38.560 --rc geninfo_all_blocks=1 00:37:38.561 --rc geninfo_unexecuted_blocks=1 00:37:38.561 00:37:38.561 ' 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.561 --rc genhtml_branch_coverage=1 00:37:38.561 --rc genhtml_function_coverage=1 00:37:38.561 --rc genhtml_legend=1 00:37:38.561 --rc geninfo_all_blocks=1 00:37:38.561 --rc geninfo_unexecuted_blocks=1 00:37:38.561 00:37:38.561 ' 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:38.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:38.561 00:11:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:41.106 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:41.106 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:41.107 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:41.107 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:41.107 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:41.107 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:41.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:41.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:37:41.107 00:37:41.107 --- 10.0.0.2 ping statistics --- 00:37:41.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:41.107 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:41.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:41.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:37:41.107 00:37:41.107 --- 10.0.0.1 ping statistics --- 00:37:41.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:41.107 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:41.107 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:41.108 ************************************ 00:37:41.108 START TEST nvmf_target_disconnect_tc1 00:37:41.108 ************************************ 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:41.108 00:11:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:41.108 00:11:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:41.108 [2024-11-10 00:11:07.142839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.108 [2024-11-10 00:11:07.142957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:37:41.108 [2024-11-10 00:11:07.143056] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:41.108 [2024-11-10 00:11:07.143105] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:41.108 [2024-11-10 00:11:07.143132] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:41.108 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:41.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:41.108 Initializing NVMe Controllers 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:41.108 00:37:41.108 real 0m0.260s 00:37:41.108 user 0m0.117s 00:37:41.108 sys 0m0.141s 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:41.108 ************************************ 00:37:41.108 END TEST nvmf_target_disconnect_tc1 00:37:41.108 ************************************ 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:41.108 ************************************ 00:37:41.108 START TEST nvmf_target_disconnect_tc2 00:37:41.108 ************************************ 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3642373 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3642373 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3642373 ']' 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:41.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:41.108 00:11:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:41.367 [2024-11-10 00:11:07.342966] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:37:41.367 [2024-11-10 00:11:07.343135] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:41.367 [2024-11-10 00:11:07.500670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:41.630 [2024-11-10 00:11:07.625965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:41.631 [2024-11-10 00:11:07.626033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:41.631 [2024-11-10 00:11:07.626055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:41.631 [2024-11-10 00:11:07.626076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:41.631 [2024-11-10 00:11:07.626092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:41.631 [2024-11-10 00:11:07.628699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:41.631 [2024-11-10 00:11:07.628766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:41.631 [2024-11-10 00:11:07.628813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:41.631 [2024-11-10 00:11:07.628817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.272 Malloc0 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.272 [2024-11-10 00:11:08.416262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.272 00:11:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.272 [2024-11-10 00:11:08.446555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:42.272 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.273 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.273 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.273 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3642532 00:37:42.273 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:42.273 00:11:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:44.827 00:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3642373 00:37:44.827 00:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error 
(sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Read completed with error (sct=0, sc=8) 00:37:44.827 starting I/O failed 00:37:44.827 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 [2024-11-10 00:11:10.487190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write 
completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 [2024-11-10 00:11:10.488121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:44.828 [2024-11-10 00:11:10.488342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.488386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.488526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.488563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.488733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.488769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.488886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.488921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 
00:37:44.828 [2024-11-10 00:11:10.489056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.489090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.489205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.489240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.489359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.489394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 
00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Write completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 Read completed with error (sct=0, sc=8) 00:37:44.828 starting I/O failed 00:37:44.828 [2024-11-10 00:11:10.490049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.828 [2024-11-10 00:11:10.490227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.490299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.490486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.490522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.490687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.490723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.490834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.490868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.490973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.491007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.491145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.491181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.491354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.491388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.491553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.491604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 
00:37:44.828 [2024-11-10 00:11:10.491750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.491797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.491967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.492001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.492140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.492174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.492316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.492351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.492490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.492525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.492679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.492714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.492907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.492956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.493090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.493126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.493268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.493302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 00:37:44.828 [2024-11-10 00:11:10.493406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.828 [2024-11-10 00:11:10.493439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.828 qpair failed and we were unable to recover it. 
00:37:44.828 [2024-11-10 00:11:10.493620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.493678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.493849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.493895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.494059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.494113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.494343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.494376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.494612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.494651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.494757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.494791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.494961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.495031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.495244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.495280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.495386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.495420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.495584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.495629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 
00:37:44.829 [2024-11-10 00:11:10.495734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.495767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.495910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.495945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.496113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.496146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.496424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.496458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.496632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.496667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.496786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.496822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.496971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.497004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.497150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.497220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.497426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.497459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.497602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.497636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 
00:37:44.829 [2024-11-10 00:11:10.497772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.497807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.497918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.497958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.498168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.498210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.498382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.498415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.498516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.498549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.498670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.498703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.498850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.498892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.499176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.499213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.499454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.499487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.499623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.499656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 
00:37:44.829 [2024-11-10 00:11:10.499759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.499793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.499927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.499960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.500161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.500194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.500298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.500333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Write completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Write completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Write completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Write completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Write completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Write completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Write completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Write completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Write completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Write completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 
00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Write completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Write completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 Read completed with error (sct=0, sc=8) 00:37:44.829 starting I/O failed 00:37:44.829 [2024-11-10 00:11:10.500998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:44.829 [2024-11-10 00:11:10.501186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.501250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.501404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.501441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.501671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.501708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.501821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.501857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.502039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.502075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.502176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.502210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.502392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.502450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 
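Each burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" lines is the set of commands still outstanding on one queue pair being completed back to the submitter with an error status once its TCP connection is gone; the "spdk_nvme_qpair_process_completions: *ERROR*: ... CQ transport error -6 (No such device or address)" line that follows is the poll call itself reporting the transport failure (-ENXIO). The sketch below is a rough, hedged illustration of the host-side shape of this, assuming SPDK's public NVMe API from spdk/nvme.h and an already-connected I/O qpair; the io_done and poll_io_qpair names are hypothetical and the code is not taken from this run.

/* Hedged sketch, not from this run.  Assumes an I/O qpair that was connected
 * with SPDK's public NVMe host API (spdk/nvme.h). */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical per-command completion callback (the cb_fn passed when
 * submitting reads/writes).  Commands aborted by a dead connection arrive
 * here with an error status, e.g. sct=0, sc=8 in the log above. */
static void io_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    (void)ctx;
    if (spdk_nvme_cpl_is_error(cpl)) {
        printf("completed with error (sct=%d, sc=%d)\n",
               cpl->status.sct, cpl->status.sc);
    }
}

/* Hypothetical poller: once the TCP connection is down, the poll first
 * drains the aborted commands through their callbacks and then returns a
 * negative errno -- -6 (ENXIO) in the log -- so the caller knows the qpair
 * is dead and must be reconnected or torn down. */
static bool poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
    int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
    if (rc < 0) {
        fprintf(stderr, "qpair poll failed: %d\n", (int)rc);
        return false;
    }
    return true;
}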
00:37:44.829 [2024-11-10 00:11:10.502621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.502670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.502816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.502875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.829 [2024-11-10 00:11:10.502998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.829 [2024-11-10 00:11:10.503035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.829 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.503222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.503257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.503369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.503403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.503602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.503637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.503746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.503785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.503933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.503969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.504069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.504103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.504212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.504246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-10 00:11:10.504390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.504429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.504575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.504620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.504740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.504775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.504906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.504941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.505068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.505108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.505219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.505254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.505372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.505407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.505541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.505578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.505699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.505734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.505869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.505903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-10 00:11:10.506044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.506078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.506300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.506359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.506543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.506577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.506701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.506734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.506852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.506920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.507141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.507177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.507321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.507356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.507464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.507499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.507653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.507688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.507822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.507857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-10 00:11:10.507968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.508004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.508168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.508205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.508345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.508381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.508515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.508550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.508705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.508741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.508845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.508879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.509011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.509062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.509278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.509329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.509490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.509524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.509634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.509669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-10 00:11:10.509809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.509861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.510040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.510093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.510253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.510292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.510432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.510466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.510633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.510668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.510814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.510852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.511021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.511055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.511189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.511230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.511338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.511373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.511508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.511541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 
00:37:44.830 [2024-11-10 00:11:10.511681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.511716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.511825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.511859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.512001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.512035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.512168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.512203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.512340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.512394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.512544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.512580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.512728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.830 [2024-11-10 00:11:10.512762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.830 qpair failed and we were unable to recover it. 00:37:44.830 [2024-11-10 00:11:10.512897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.512932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.513082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.513116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.513320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.513353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 
00:37:44.831 [2024-11-10 00:11:10.513500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.513535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.513687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.513735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.513898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.513962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.514158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.514199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.514389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.514428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.514620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.514656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.514789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.514838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.514989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.515040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.515189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.515227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.515388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.515432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 
00:37:44.831 [2024-11-10 00:11:10.515567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.515608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.515723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.515758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.515960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.515997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.516175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.516211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.516322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.516360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.516524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.516560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.516735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.516784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.516962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.517017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.517199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.517257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.517444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.517479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 
00:37:44.831 [2024-11-10 00:11:10.517648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.517684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.517827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.517863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.517991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.518028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.518262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.518300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.518462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.518495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.518630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.518664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.518800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.518833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.519007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.519041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.519152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.519185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.519327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.519360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 
00:37:44.831 [2024-11-10 00:11:10.519485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.519521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.519693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.519743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.519870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.519918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.520068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.520103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.520231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.831 [2024-11-10 00:11:10.520269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.831 qpair failed and we were unable to recover it. 00:37:44.831 [2024-11-10 00:11:10.520403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.520436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.520636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.520670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.520792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.520841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.521048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.521103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.521265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.521305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 
00:37:44.832 [2024-11-10 00:11:10.521456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.521496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.521657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.521693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.521812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.521849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.522008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.522044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.522146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.522182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.522316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.522349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.522531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.522605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.522739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.522789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.522924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.522960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.523243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.523298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 
00:37:44.832 [2024-11-10 00:11:10.523490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.523524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.523651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.523686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.523809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.523844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.524033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.524067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.524215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.524250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.524397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.524461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.524619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.524657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.524767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.524801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.524915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.524949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.525060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.525095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 
00:37:44.832 [2024-11-10 00:11:10.525256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.525294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.525448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.525501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.525635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.525684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.525848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.525904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.526046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.526080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.526310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.526344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.526537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.526570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.526681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.526715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.526855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.526889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.527056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.527089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 
00:37:44.832 [2024-11-10 00:11:10.527231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.527264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.527430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.527463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.527601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.527651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.527798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.527835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.832 [2024-11-10 00:11:10.527944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.832 [2024-11-10 00:11:10.527984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.832 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.528175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.528226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.528363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.528397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.528543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.528578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.528718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.528752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.528875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.528943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 
00:37:44.833 [2024-11-10 00:11:10.529180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.529217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.529349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.529383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.529561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.529604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.529742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.529777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.529909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.529947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.530132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.530166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.530302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.530342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.530450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.530486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.530687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.530737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.530900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.530941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 
00:37:44.833 [2024-11-10 00:11:10.531160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.531194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.531307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.531340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.531476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.531513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.531681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.531731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.531878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.531919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.532079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.532113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.532223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.532257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.532364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.532399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.532550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.532626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.532748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.532783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 
00:37:44.833 [2024-11-10 00:11:10.533002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.533035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.533168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.533206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.533371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.533408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.533578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.533633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.533752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.533787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.533931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.533979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.534262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.534320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.534499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.534535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.534691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.534727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.534878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.534927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 
00:37:44.833 [2024-11-10 00:11:10.535044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.535079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.535204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.535238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.535340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.833 [2024-11-10 00:11:10.535373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.833 qpair failed and we were unable to recover it. 00:37:44.833 [2024-11-10 00:11:10.535502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.535551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.535723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.535772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.535953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.535989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.536155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.536189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.536451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.536509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.536668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.536713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.536832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.536891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 
00:37:44.834 [2024-11-10 00:11:10.537025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.537061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.537200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.537234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.537385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.537419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.537546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.537580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.537718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.537761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.537929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.537964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.538128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.538181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.538385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.538419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.538547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.538601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.538766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.538799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 
00:37:44.834 [2024-11-10 00:11:10.538902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.538964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.539144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.539181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.539369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.539402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.539506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.539539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.539658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.539693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.539854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.539887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.540016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.540049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.540288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.540345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.540495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.540528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.540663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.540712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 
00:37:44.834 [2024-11-10 00:11:10.540873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.540922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.541090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.541153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.541419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.541454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.541582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.541624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.541763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.541798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.541943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.541994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.542189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.542251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.542440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.542477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.542641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.542677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.542802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.542850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 
00:37:44.834 [2024-11-10 00:11:10.543056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.543091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.834 [2024-11-10 00:11:10.543236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.834 [2024-11-10 00:11:10.543288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.834 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.543422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.543475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.543654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.543689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.543810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.543865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.543987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.544025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.544309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.544344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.544459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.544494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.544683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.544733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.544879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.544916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 
00:37:44.835 [2024-11-10 00:11:10.545078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.545113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.545245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.545278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.545415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.545449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.545580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.545625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.545727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.545761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.545921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.545954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.546056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.546090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.546238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.546277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.546481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.546519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.546675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.546711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 
00:37:44.835 [2024-11-10 00:11:10.546854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.546887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.547033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.547071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.547244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.547294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.547402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.547436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.547604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.547653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.547782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.547830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.548116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.548177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.548380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.548415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.548520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.548555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.548708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.548745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 
00:37:44.835 [2024-11-10 00:11:10.548961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.549014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.549258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.549315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.549505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.549540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.835 [2024-11-10 00:11:10.549667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.835 [2024-11-10 00:11:10.549703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.835 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-10 00:11:10.549861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-10 00:11:10.549909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-10 00:11:10.550032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-10 00:11:10.550069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-10 00:11:10.550319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-10 00:11:10.550354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-10 00:11:10.550481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-10 00:11:10.550516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-10 00:11:10.550657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-10 00:11:10.550706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 00:37:44.836 [2024-11-10 00:11:10.550854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.836 [2024-11-10 00:11:10.550898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.836 qpair failed and we were unable to recover it. 
00:37:44.836 [2024-11-10 00:11:10.551042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.836 [2024-11-10 00:11:10.551077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.836 qpair failed and we were unable to recover it.
[Identical connect() failures (errno = 111, ECONNREFUSED) against addr=10.0.0.2, port=4420 repeat continuously from 00:11:10.551 through 00:11:10.593, cycling over tqpairs 0x6150001ffe80, 0x615000210000, 0x61500021ff00 and 0x6150001f2f00; every attempt ends with "qpair failed and we were unable to recover it."]
00:37:44.841 [2024-11-10 00:11:10.593229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.841 [2024-11-10 00:11:10.593263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.841 qpair failed and we were unable to recover it.
00:37:44.841 [2024-11-10 00:11:10.593414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-10 00:11:10.593462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-10 00:11:10.593611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-10 00:11:10.593647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-10 00:11:10.593781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-10 00:11:10.593827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-10 00:11:10.593971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-10 00:11:10.594024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-10 00:11:10.594206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-10 00:11:10.594239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-10 00:11:10.594380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-10 00:11:10.594434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-10 00:11:10.594645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-10 00:11:10.594694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-10 00:11:10.594851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-10 00:11:10.594900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-10 00:11:10.595051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-10 00:11:10.595106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-10 00:11:10.595365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-10 00:11:10.595422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 
00:37:44.841 [2024-11-10 00:11:10.595584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.841 [2024-11-10 00:11:10.595632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.841 qpair failed and we were unable to recover it. 00:37:44.841 [2024-11-10 00:11:10.595773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.595806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.595993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.596046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.596233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.596289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.596474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.596507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.596638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.596673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.596806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.596839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.597034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.597095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.597330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.597379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.597577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.597632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 
00:37:44.842 [2024-11-10 00:11:10.597788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.597838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.597974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.598016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.598217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.598256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.598409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.598454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.598583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.598639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.598784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.598832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.598975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.599023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.599161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.599201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.599359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.599394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.599562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.599605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 
00:37:44.842 [2024-11-10 00:11:10.599723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.599758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.599898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.599933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.600135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.600188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.600349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.600388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.600534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.600572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.600748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.600796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.600925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.600980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.601128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.601166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.601292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.601336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.601495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.601529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 
00:37:44.842 [2024-11-10 00:11:10.601721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.601773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.602004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.602064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.602278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.602345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.602527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.602565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.602730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.602764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.602931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.602985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.603172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.603212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.603416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.603493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.603686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.603728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.603910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.603948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 
00:37:44.842 [2024-11-10 00:11:10.604148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.842 [2024-11-10 00:11:10.604186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.842 qpair failed and we were unable to recover it. 00:37:44.842 [2024-11-10 00:11:10.604330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.604368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.604544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.604595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.604766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.604815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.604974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.605027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.605292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.605352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.605491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.605525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.605690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.605725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.605898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.605952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.606186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.606233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 
00:37:44.843 [2024-11-10 00:11:10.606509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.606576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.606721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.606761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.606998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.607035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.607370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.607427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.607583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.607631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.607756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.607789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.607929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.607961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.608199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.608258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.608412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.608446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.608616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.608652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 
00:37:44.843 [2024-11-10 00:11:10.608785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.608819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.609051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.609110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.609325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.609394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.609563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.609609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.609791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.609839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.610100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.610169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.610442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.610500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.610642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.610678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.610813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.610847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.611017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.611071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 
00:37:44.843 [2024-11-10 00:11:10.611352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.611420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.611574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.611635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.611779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.611814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.611997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.612053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.612251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.612317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.612477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.612522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.612697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.612747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.612923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.843 [2024-11-10 00:11:10.612976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.843 qpair failed and we were unable to recover it. 00:37:44.843 [2024-11-10 00:11:10.613238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.613283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.613431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.613469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 
00:37:44.844 [2024-11-10 00:11:10.613616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.613650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.613821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.613869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.614064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.614124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.614377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.614435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.614577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.614621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.614748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.614798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.614931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.614986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.615154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.615189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.615406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.615467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.615630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.615679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 
00:37:44.844 [2024-11-10 00:11:10.615839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.615905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.616057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.616162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.616379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.616435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.616580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.616626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.616810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.616845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.617059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.617108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.617315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.617373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.617531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.617564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.617750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.617799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.617971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.618024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 
00:37:44.844 [2024-11-10 00:11:10.618292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.618352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.618500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.618539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.618693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.618728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.618889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.618923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.619182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.619220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.619350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.619388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.619549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.619597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.619762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.619811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.619996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.620036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.620221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.620295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 
00:37:44.844 [2024-11-10 00:11:10.620477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.620514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.620675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.620710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.844 [2024-11-10 00:11:10.620816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.844 [2024-11-10 00:11:10.620849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.844 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-10 00:11:10.620956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-10 00:11:10.620989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-10 00:11:10.621186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-10 00:11:10.621223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-10 00:11:10.621422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-10 00:11:10.621458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-10 00:11:10.621664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-10 00:11:10.621698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-10 00:11:10.621835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-10 00:11:10.621884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-10 00:11:10.622076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-10 00:11:10.622112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 00:37:44.845 [2024-11-10 00:11:10.622333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.845 [2024-11-10 00:11:10.622370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.845 qpair failed and we were unable to recover it. 
00:37:44.845 [2024-11-10 00:11:10.622514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.845 [2024-11-10 00:11:10.622551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:44.845 qpair failed and we were unable to recover it.
00:37:44.845 [2024-11-10 00:11:10.622720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.845 [2024-11-10 00:11:10.622769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.845 qpair failed and we were unable to recover it.
00:37:44.845 [2024-11-10 00:11:10.622946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.845 [2024-11-10 00:11:10.623005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:44.845 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it") repeats continuously from 00:11:10.623 through 00:11:10.665 against addr=10.0.0.2, port=4420, cycling over tqpair handles 0x6150001f2f00, 0x6150001ffe80, 0x615000210000 and 0x61500021ff00 ...]
00:37:44.850 [2024-11-10 00:11:10.665724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-10 00:11:10.665762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-10 00:11:10.665916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-10 00:11:10.665956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-10 00:11:10.666168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-10 00:11:10.666227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-10 00:11:10.666436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-10 00:11:10.666495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-10 00:11:10.666646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-10 00:11:10.666681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-10 00:11:10.666816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-10 00:11:10.666850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-10 00:11:10.666988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-10 00:11:10.667022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-10 00:11:10.667171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-10 00:11:10.667221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-10 00:11:10.667380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-10 00:11:10.667418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-10 00:11:10.667629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-10 00:11:10.667679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 
00:37:44.850 [2024-11-10 00:11:10.667815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-10 00:11:10.667850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-10 00:11:10.668002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.850 [2024-11-10 00:11:10.668040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.850 qpair failed and we were unable to recover it. 00:37:44.850 [2024-11-10 00:11:10.668247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.668285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.668488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.668525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.668669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.668710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.668846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.668879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.668985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.669037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.669160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.669197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.669372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.669411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.669600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.669649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 
00:37:44.851 [2024-11-10 00:11:10.669766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.669802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.669949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.670001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.670248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.670300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.670441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.670475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.670584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.670624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.670752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.670786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.670932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.670966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.671075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.671108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.671243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.671282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.671429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.671478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 
00:37:44.851 [2024-11-10 00:11:10.671614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.671651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.671795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.671844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.671980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.672017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.672151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.672186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.672298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.672334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.672456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.672505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.672652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.672691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.672870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.672906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.673055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.673092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.673233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.673271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 
00:37:44.851 [2024-11-10 00:11:10.673449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.673488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.673625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.673662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.673835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.673883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.674000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.674036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.674162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.674200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.674400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.674437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.674554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.674599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.674783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.674832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.675015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.675056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.675294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.675333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 
00:37:44.851 [2024-11-10 00:11:10.675518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.675556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.675746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.675795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.851 qpair failed and we were unable to recover it. 00:37:44.851 [2024-11-10 00:11:10.675955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.851 [2024-11-10 00:11:10.676017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.676281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.676341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.676456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.676492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.676666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.676703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.676806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.676840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.676959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.677008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.677185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.677221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.677441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.677480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 
00:37:44.852 [2024-11-10 00:11:10.677634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.677690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.677848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.677897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.678044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.678101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.678295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.678354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.678472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.678512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.678651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.678705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.678838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.678883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.679018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.679051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.679180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.679224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.679378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.679415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 
00:37:44.852 [2024-11-10 00:11:10.679566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.679632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.679749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.679784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.679891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.679925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.680087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.680121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.680254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.680287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.680477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.680513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.680700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.680741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.680876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.680931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.681120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.681177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.681325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.681380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 
00:37:44.852 [2024-11-10 00:11:10.681489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.681524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.681684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.681740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.681937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.681989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.682176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.682228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.682391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.682425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.682583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.682638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.682777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.682826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.852 [2024-11-10 00:11:10.683134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.852 [2024-11-10 00:11:10.683193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.852 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.683399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.683456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.683583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.683655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-10 00:11:10.683817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.683850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.684028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.684066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.684235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.684301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.684436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.684473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.684638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.684688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.684821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.684870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.685003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.685055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.685195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.685233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.685449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.685491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.685657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.685693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-10 00:11:10.685832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.685867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.686003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.686039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.686169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.686222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.686397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.686437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.686604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.686639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.686747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.686781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.686956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.686993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.687196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.687233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.687389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.687432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.687627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.687663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-10 00:11:10.687821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.687875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.688056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.688109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.688325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.688383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.688536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.688575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.688758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.688793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.688959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.689026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.689215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.689256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.689518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.689577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.689775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.689809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.689911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.689945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 
00:37:44.853 [2024-11-10 00:11:10.690104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.690142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.690398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.690456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.690656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.690692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.853 [2024-11-10 00:11:10.690841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.853 [2024-11-10 00:11:10.690890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.853 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.691034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.691075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.691200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.691236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.691414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.691450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.691602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.691654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.691805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.691868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.692143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.692203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 
00:37:44.854 [2024-11-10 00:11:10.692364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.692412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.692565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.692610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.692789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.692822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.693049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.693086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.693240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.693276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.693433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.693480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.693666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.693716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.693855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.693904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.694105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.694145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.694296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.694336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 
00:37:44.854 [2024-11-10 00:11:10.694465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.694519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.694707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.694757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.694906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.694940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.695193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.695264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.695493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.695530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.695661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.695697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.695816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.695868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.696023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.696073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.696212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.696251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.696393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.696428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 
00:37:44.854 [2024-11-10 00:11:10.696577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.696620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.696757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.696791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.696948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.696981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.697118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.697151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.697285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.697318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.697466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.697502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.697663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.697699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.697864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.697926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.698082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.698120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.698277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.698314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 
00:37:44.854 [2024-11-10 00:11:10.698427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.698463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.698628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.698662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.698802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.854 [2024-11-10 00:11:10.698851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.854 qpair failed and we were unable to recover it. 00:37:44.854 [2024-11-10 00:11:10.699101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.699143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.699318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.699358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.699499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.699537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.699716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.699766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.699888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.699944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.700148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.700210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.700393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.700466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 
00:37:44.855 [2024-11-10 00:11:10.700646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.700681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.700793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.700826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.701034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.701095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.701318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.701386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.701494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.701549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.701678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.701714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.701870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.701919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.702216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.702286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.702476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.702516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.702669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.702704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 
00:37:44.855 [2024-11-10 00:11:10.702859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.702893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.703075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.703112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.703285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.703339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.703482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.703520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.703704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.703739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.703873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.703906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.704021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.704058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.704180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.704217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.704334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.704376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.704549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.704626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 
00:37:44.855 [2024-11-10 00:11:10.704784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.704839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.704991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.705031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.705182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.705219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.705401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.705438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.705612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.705678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.705814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.705849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.706056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.706133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.706387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.706443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.706613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.706649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.706799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.706854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 
00:37:44.855 [2024-11-10 00:11:10.707018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.707081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.707234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.855 [2024-11-10 00:11:10.707285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.855 qpair failed and we were unable to recover it. 00:37:44.855 [2024-11-10 00:11:10.707397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.707432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.707560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.707628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.707767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.707840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.708004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.708045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.708256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.708319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.708468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.708507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.708665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.708702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.708864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.708917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 
00:37:44.856 [2024-11-10 00:11:10.709057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.709107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.709382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.709440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.709633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.709668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.709776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.709809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.709974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.710012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.710176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.710225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.710416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.710471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.710616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.710652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.710805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.710859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.711019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.711075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 
00:37:44.856 [2024-11-10 00:11:10.711336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.711393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.711535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.711571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.711701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.711734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.711912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.711949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.712199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.712255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.712429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.712466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.712596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.712647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.712754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.712787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.713039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.713108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.713300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.713377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 
00:37:44.856 [2024-11-10 00:11:10.713542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.713576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.713717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.713750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.713931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.713965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.714073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.714111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.714367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.714427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.714579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.714639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.714746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.714780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.714952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.715022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.715249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.715314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.715517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.715555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 
00:37:44.856 [2024-11-10 00:11:10.715737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.856 [2024-11-10 00:11:10.715772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.856 qpair failed and we were unable to recover it. 00:37:44.856 [2024-11-10 00:11:10.715921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.715958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.716227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.716281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.716435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.716472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.716617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.716667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.716805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.716838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.717028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.717083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.717328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.717368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.717561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.717624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.717742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.717781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 
00:37:44.857 [2024-11-10 00:11:10.717904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.717940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.718069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.718121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.718344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.718400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.718608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.718679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.718822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.718860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.719071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.719133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.719360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.719418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.719610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.719645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.719800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.719834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.720097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.720134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 
00:37:44.857 [2024-11-10 00:11:10.720335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.720394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.720564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.720609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.720765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.720813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.720942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.720991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.721186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.721249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.721390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.721424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.721624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.721675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.721829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.721878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.722103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.722172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.722438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.722504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 
00:37:44.857 [2024-11-10 00:11:10.722625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.722678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.722824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.722859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.722995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.723029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.723224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.723263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.723403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.723442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.857 qpair failed and we were unable to recover it. 00:37:44.857 [2024-11-10 00:11:10.723617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.857 [2024-11-10 00:11:10.723651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.723751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.723784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.723924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.723961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.724217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.724274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.724406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.724446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 
00:37:44.858 [2024-11-10 00:11:10.724637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.724672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.724776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.724811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.725001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.725073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.725313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.725367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.725543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.725603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.725735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.725771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.725928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.725966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.726119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.726156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.726381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.726455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.726671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.726719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 
00:37:44.858 [2024-11-10 00:11:10.726864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.726902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.727037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.727073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.727177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.727211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.727371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.727410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.727561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.727603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.727717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.727752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.727915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.727969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.728209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.728286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.728435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.728472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.728584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.728653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 
00:37:44.858 [2024-11-10 00:11:10.728818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.728851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.728954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.728987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.729100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.729133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.729279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.729315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.729440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.729477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.729626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.729692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.729819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.729868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.730030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.730084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.730195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.730236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.730396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.730451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 
00:37:44.858 [2024-11-10 00:11:10.730652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.730702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.730847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.730899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.731075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.731112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.731243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.858 [2024-11-10 00:11:10.731296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.858 qpair failed and we were unable to recover it. 00:37:44.858 [2024-11-10 00:11:10.731475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.731512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.731683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.731732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.731905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.731975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.732171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.732225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.732398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.732452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.732652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.732710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 
00:37:44.859 [2024-11-10 00:11:10.732890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.732944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.733125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.733184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.733367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.733422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.733573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.733617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.733760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.733793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.734004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.734058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.734348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.734387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.734505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.734542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.734710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.734755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.734870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.734904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 
00:37:44.859 [2024-11-10 00:11:10.735093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.735166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.735321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.735390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.735528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.735561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.735757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.735806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.736016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.736070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.736325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.736366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.736482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.736520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.736690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.736725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.736838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.736873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.737015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.737069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 
00:37:44.859 [2024-11-10 00:11:10.737272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.737341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.737490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.737537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.737705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.737741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.737884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.737925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.738052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.738088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.738242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.738306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.738451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.738488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.738674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.738724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.738852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.738916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.739075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.739111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 
00:37:44.859 [2024-11-10 00:11:10.739253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.739288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.739434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.739483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.739629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.739680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.739824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.859 [2024-11-10 00:11:10.739883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.859 qpair failed and we were unable to recover it. 00:37:44.859 [2024-11-10 00:11:10.740072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.740120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.740285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.740335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.740495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.740533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.740680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.740716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.740884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.740918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.741185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.741244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 
00:37:44.860 [2024-11-10 00:11:10.741379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.741416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.741559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.741606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.741776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.741820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.741958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.741994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.742134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.742186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.742332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.742380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.742551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.742584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.742723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.742756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.742932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.742986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.743127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.743181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 
00:37:44.860 [2024-11-10 00:11:10.743350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.743388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.743531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.743593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.743782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.743831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.744037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.744091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.744272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.744332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.744508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.744547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.744676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.744712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.744819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.744854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.745040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.745087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.745274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.745311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 
00:37:44.860 [2024-11-10 00:11:10.745429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.745466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.745648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.745682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.745791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.745825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.745992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.746037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.746213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.746261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.746408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.746446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.746607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.746641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.860 qpair failed and we were unable to recover it. 00:37:44.860 [2024-11-10 00:11:10.746780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.860 [2024-11-10 00:11:10.746819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.746974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.747023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.747224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.747263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 
00:37:44.861 [2024-11-10 00:11:10.747463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.747501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.747664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.747700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.747832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.747870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.748042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.748109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.748265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.748324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.748465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.748505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.748666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.748712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.748865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.748898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.749050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.749099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.749287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.749326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 
00:37:44.861 [2024-11-10 00:11:10.749443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.749480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.749676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.749715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.749885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.749920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.750115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.750154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.750270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.750307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.750449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.750486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.750680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.750715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.750811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.750846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.751020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.751077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.751238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.751293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 
00:37:44.861 [2024-11-10 00:11:10.751473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.751526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.751695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.751731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.751897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.751938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.752103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.752144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.752334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.752408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.752577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.752645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.752766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.752804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.752945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.752979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.753144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.753182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.753344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.753381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 
00:37:44.861 [2024-11-10 00:11:10.753536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.753571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.753807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.753841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.753981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.754015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.754150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.754188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.754349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.754387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.754568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.754649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.754817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.861 [2024-11-10 00:11:10.754865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.861 qpair failed and we were unable to recover it. 00:37:44.861 [2024-11-10 00:11:10.754980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.755019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.755172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.755227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.755428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.755482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 
00:37:44.862 [2024-11-10 00:11:10.755674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.755711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.755879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.755913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.756099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.756132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.756314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.756376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.756556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.756602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.756798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.756835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.756963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.756998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.757149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.757185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.757394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.757432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.757549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.757594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 
00:37:44.862 [2024-11-10 00:11:10.757778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.757831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.757977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.758028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.758157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.758194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.758306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.758342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.758472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.758506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.758621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.758655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.758816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.758848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.758993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.759030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.759175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.759207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.759350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.759384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 
00:37:44.862 [2024-11-10 00:11:10.759551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.759584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.759725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.759757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.759921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.759957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.760183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.760246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.760428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.760464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.760592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.760650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.760800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.760854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.761010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.761062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.862 [2024-11-10 00:11:10.761255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.862 [2024-11-10 00:11:10.761311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.862 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.761462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.761518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 
00:37:44.863 [2024-11-10 00:11:10.761693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.761731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.761913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.761948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.762129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.762216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.762375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.762415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.762580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.762621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.762753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.762791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.762929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.762968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.763148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.763185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.763366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.763403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.763516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.763556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 
00:37:44.863 [2024-11-10 00:11:10.763757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.763805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.764010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.764061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.764240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.764278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.764442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.764480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.764621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.764670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.764839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.764890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.765058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.765100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.765359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.765401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.765602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.765639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.765754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.765791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 
00:37:44.863 [2024-11-10 00:11:10.765962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.766002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.766192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.766291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.766453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.766499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.766624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.766678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.766818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.766854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.767031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.767074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.767277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.767337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.767500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.767537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.767673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.767711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.767849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.767888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 
00:37:44.863 [2024-11-10 00:11:10.768079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.768120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.768300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.768337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.768494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.768536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.768727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.768774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.768953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.768992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.769139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.769218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.769522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.769594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.769774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.863 [2024-11-10 00:11:10.769808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.863 qpair failed and we were unable to recover it. 00:37:44.863 [2024-11-10 00:11:10.769975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.770015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.770195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.770232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 
00:37:44.864 [2024-11-10 00:11:10.770374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.770408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.770601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.770640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.770781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.770815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.770974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.771020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.771147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.771198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.771399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.771441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.771557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.771621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.771761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.771809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.771952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.771985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.772150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.772184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 
00:37:44.864 [2024-11-10 00:11:10.772343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.772376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.772508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.772556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.772717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.772768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.772923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.772982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.773177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.773217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.773366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.773416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.773538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.773604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.773768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.773803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.773931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.773978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.774129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.774177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 
00:37:44.864 [2024-11-10 00:11:10.774342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.774404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.774575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.774634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.774832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.774899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.775111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.775212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.775484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.775540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.775750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.775785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.775933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.775981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.776091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.776128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.776284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.776388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.776533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.776566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 
00:37:44.864 [2024-11-10 00:11:10.776731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.776772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.776922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.776976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.777123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.777164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.777296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.777337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.777455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.777506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.777621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.777662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.864 [2024-11-10 00:11:10.777811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.864 [2024-11-10 00:11:10.777850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.864 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.778048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.778086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.778299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.778352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.778504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.778541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 
00:37:44.865 [2024-11-10 00:11:10.778700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.778749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.778926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.778986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.779128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.779185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.779330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.779383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.779500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.779536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.779733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.779787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.779943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.779981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.780174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.780239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.780532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.780598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.780758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.780796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 
00:37:44.865 [2024-11-10 00:11:10.781008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.781050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.781297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.781354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.781555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.781607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.781763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.781820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.782003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.782041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.782200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.782256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.782426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.782460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.782617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.782654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.782799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.782882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.783036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.783073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 
00:37:44.865 [2024-11-10 00:11:10.783172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.783206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.783349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.783383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.783504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.783541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.783690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.783731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.783891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.783930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.784176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.784243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.784381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.784416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.784521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.784555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.784756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.784811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.784967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.785019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 
00:37:44.865 [2024-11-10 00:11:10.785262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.785317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.785466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.785507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.785683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.785737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.785909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.785949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.786136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.786201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.786463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.865 [2024-11-10 00:11:10.786528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.865 qpair failed and we were unable to recover it. 00:37:44.865 [2024-11-10 00:11:10.786745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.786799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.786931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.786983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.787179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.787230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.787394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.787428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 
00:37:44.866 [2024-11-10 00:11:10.787567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.787630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.787803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.787856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.787973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.788008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.788175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.788209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.788353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.788389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.788499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.788542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.788691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.788726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.788876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.788913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.789071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.789112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.789280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.789318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 
00:37:44.866 [2024-11-10 00:11:10.789442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.789480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.789639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.789677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.789873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.789926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.790086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.790126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.790318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.790356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.790509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.790543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.790672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.790706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.790821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.790874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.791061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.791100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.791228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.791267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 
00:37:44.866 [2024-11-10 00:11:10.791414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.791452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.791662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.791712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.791885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.791954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.792115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.792180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.792361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.792414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.792574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.792616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.792740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.792781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.792951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.793014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.793197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.793235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.793340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.793404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 
00:37:44.866 [2024-11-10 00:11:10.793542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.793575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.866 qpair failed and we were unable to recover it. 00:37:44.866 [2024-11-10 00:11:10.793765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.866 [2024-11-10 00:11:10.793800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.794001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.794050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.794202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.794253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.794414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.794451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.794626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.794670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.794819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.794874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.795051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.795091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.795241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.795279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.795409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.795444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 
00:37:44.867 [2024-11-10 00:11:10.795600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.795636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.795790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.795842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.796021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.796073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.796237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.796271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.796427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.796464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.796561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.796607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.796716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.796750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.796919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.796953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.797125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.797168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.797314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.797347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 
00:37:44.867 [2024-11-10 00:11:10.797494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.797530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.797693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.797742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.797917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.797977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.798170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.798210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.798455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.798493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.798660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.798695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.798873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.798919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.799176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.799232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.799404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.799438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.799628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.799679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 
00:37:44.867 [2024-11-10 00:11:10.799815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.799849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.799994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.800032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.800182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.800219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.800414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.800450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.800647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.800682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.800868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.800905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.801126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.801164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.801314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.801350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.801482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.867 [2024-11-10 00:11:10.801519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.867 qpair failed and we were unable to recover it. 00:37:44.867 [2024-11-10 00:11:10.801707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.801757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 
00:37:44.868 [2024-11-10 00:11:10.801947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.801996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.802122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.802162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.802324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.802362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.802517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.802553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.802728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.802763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.802913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.802957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.803149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.803232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.803414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.803456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.803632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.803683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.803788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.803822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 
00:37:44.868 [2024-11-10 00:11:10.803957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.804018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.804187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.804241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.804482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.804518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.804712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.804746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.804858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.804894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.805097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.805134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.805249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.805299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.805434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.805472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.805637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.805688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.805863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.805900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 
00:37:44.868 [2024-11-10 00:11:10.806041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.806081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.806290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.806344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.806475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.806510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.806633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.806667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.806777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.806814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.806990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.807023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.807126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.807160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.807295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.807328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.807493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.807543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.807682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.807731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 
00:37:44.868 [2024-11-10 00:11:10.807911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.807965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.808176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.808233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.808473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.808518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.808719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.808758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.808901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.808936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.809151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.809216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.868 qpair failed and we were unable to recover it. 00:37:44.868 [2024-11-10 00:11:10.809461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.868 [2024-11-10 00:11:10.809516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.809700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.809736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.809869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.809930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.810088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.810184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 
00:37:44.869 [2024-11-10 00:11:10.810387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.810422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.810535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.810573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.810735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.810783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.810930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.810971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.811133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.811168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.811304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.811354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.811467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.811501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.811675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.811732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.811903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.811957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.812115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.812158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 
00:37:44.869 [2024-11-10 00:11:10.812426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.812496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.812633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.812668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.812778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.812811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.812933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.812974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.813157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.813196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.813360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.813398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.813566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.813611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.813775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.813822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.814047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.814103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.814387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.814445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 
00:37:44.869 [2024-11-10 00:11:10.814601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.814639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.814748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.814785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.814928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.814982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.815208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.815266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.815498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.815553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.815699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.815735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.815951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.816010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.816239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.816277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.816411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.816465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.816601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.816650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 
00:37:44.869 [2024-11-10 00:11:10.816778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.816817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.817002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.817056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.817276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.817336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.869 [2024-11-10 00:11:10.817545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.869 [2024-11-10 00:11:10.817616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.869 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.817792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.817841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.818067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.818116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.818363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.818404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.818566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.818615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.818823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.818884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.819124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.819201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 
00:37:44.870 [2024-11-10 00:11:10.819433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.819475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.819649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.819685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.819848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.819885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.820012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.820066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.820277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.820346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.820477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.820521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.820689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.820724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.820832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.820869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.821020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.821065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.821185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.821222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 
00:37:44.870 [2024-11-10 00:11:10.821373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.821411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.821599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.821636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.821777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.821821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.821961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.822003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.822149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.822186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.822345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.822383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.822548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.822609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.822771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.822820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.822998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.823054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.823264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.823326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 
00:37:44.870 [2024-11-10 00:11:10.823445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.823480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.823665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.823725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.823853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.823911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.824187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.824253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.824485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.824523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.824722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.824758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.824910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.824952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.825166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.825204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.825439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.825474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.825605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.825640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 
00:37:44.870 [2024-11-10 00:11:10.825783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.825817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.825981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.826018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.870 [2024-11-10 00:11:10.826227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.870 [2024-11-10 00:11:10.826297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.870 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.826535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.826601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.826736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.826774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.826958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.826998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.827166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.827205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.827429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.827464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.827640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.827674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.827785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.827825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 
00:37:44.871 [2024-11-10 00:11:10.828020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.828084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.828239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.828279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.828473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.828546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.828694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.828730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.828883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.828929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.829096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.829159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.829373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.829437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.829564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.829611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.829783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.829841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.829998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.830036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 
00:37:44.871 [2024-11-10 00:11:10.830253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.830291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.830466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.830500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.830645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.830679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.830818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.830854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.830986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.831019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.831197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.831238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.831389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.831429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.831605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.831672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.831856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.831921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.832066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.832120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 
00:37:44.871 [2024-11-10 00:11:10.832283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.832346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.832488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.832524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.832690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.832726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.832842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.832877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.833013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.833047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.833227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.833277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.833387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.833422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.833599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.833636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.833785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.833838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.833988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.834048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 
00:37:44.871 [2024-11-10 00:11:10.834179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.871 [2024-11-10 00:11:10.834231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.871 qpair failed and we were unable to recover it. 00:37:44.871 [2024-11-10 00:11:10.834373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.834408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.834550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.834600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.834787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.834842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.835046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.835085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.835242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.835315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.835497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.835536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.835734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.835789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.835982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.836048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.836206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.836261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 
00:37:44.872 [2024-11-10 00:11:10.836435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.836469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.836624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.836704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.836872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.836927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.837071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.837109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.837329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.837389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.837561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.837637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.837756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.837791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.837925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.837965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.838143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.838196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.838481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.838521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 
00:37:44.872 [2024-11-10 00:11:10.838680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.838714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.838845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.838890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.839073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.839111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.839366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.839423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.839599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.839667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.839821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.839859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.840026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.840091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.872 [2024-11-10 00:11:10.840283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.872 [2024-11-10 00:11:10.840345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.872 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.840457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.840492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.840675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.840730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 
00:37:44.873 [2024-11-10 00:11:10.840917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.840978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.841136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.841199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.841366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.841400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.841548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.841605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.841783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.841838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.842025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.842066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.842234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.842272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.842425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.842475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.842616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.842669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.842808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.842843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 
00:37:44.873 [2024-11-10 00:11:10.842951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.843004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.843159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.843196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.843381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.843435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.843611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.843664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.843812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.843854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.843994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.844033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.844249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.844286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.844422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.844471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.844664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.844700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.844821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.844859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 
00:37:44.873 [2024-11-10 00:11:10.845143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.845198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.845374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.845412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.845547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.845585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.845757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.845791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.845964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.846022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.846267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.846332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.846489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.846542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.846717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.846753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.846919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.846956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.847241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.847295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 
00:37:44.873 [2024-11-10 00:11:10.847436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.847476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.873 qpair failed and we were unable to recover it. 00:37:44.873 [2024-11-10 00:11:10.847669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.873 [2024-11-10 00:11:10.847719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.847916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.847957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.848121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.848160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.848326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.848364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.848511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.848549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.848707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.848757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.848904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.848939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.849126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.849166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.849435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.849491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 
00:37:44.874 [2024-11-10 00:11:10.849653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.849688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.849827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.849869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.850000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.850062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.850211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.850289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.850465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.850501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.850700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.850736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.850862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.850911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.851071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.851127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.851312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.851364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.851479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.851513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 
00:37:44.874 [2024-11-10 00:11:10.851695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.851745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.851897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.851936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.852157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.852230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.852437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.852504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.852668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.852705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.852899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.852956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.853123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.853177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.853374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.853431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.853572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.853614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.853761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.853795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 
00:37:44.874 [2024-11-10 00:11:10.853930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.853965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.854076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.854111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.854221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.854255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.854434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.854490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.854613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.874 [2024-11-10 00:11:10.854650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.874 qpair failed and we were unable to recover it. 00:37:44.874 [2024-11-10 00:11:10.854787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.854822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.855028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.855081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.855272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.855334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.855450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.855485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.855606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.855645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 
00:37:44.875 [2024-11-10 00:11:10.855809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.855843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.855973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.856007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.856144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.856178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.856318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.856364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.856506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.856543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.856649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.856693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.856838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.856892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.857026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.857089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.857246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.857282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.857449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.857484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 
00:37:44.875 [2024-11-10 00:11:10.857638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.857677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.857832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.857867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.858002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.858045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.858186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.858233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.858367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.858404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.858503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.858541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.858697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.858746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.858859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.858931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.859098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.859173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.859430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.859485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 
00:37:44.875 [2024-11-10 00:11:10.859612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.859662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.859812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.859850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.860055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.860129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.860392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.860447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.860574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.860617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.860801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.860854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.860995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.861056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.861256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.861304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.875 qpair failed and we were unable to recover it. 00:37:44.875 [2024-11-10 00:11:10.861461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.875 [2024-11-10 00:11:10.861499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.861621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.861654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 
00:37:44.876 [2024-11-10 00:11:10.861794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.861828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.861985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.862022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.862198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.862235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.862399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.862436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.862563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.862609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.862758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.862793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.862949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.863001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.863163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.863217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.863444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.863502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.863673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.863726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 
00:37:44.876 [2024-11-10 00:11:10.863878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.863935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.864039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.864073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.864204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.864243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.864345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.864379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.864540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.864600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.864788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.864843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.865094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.865152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.865303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.865366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.865521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.865559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.865730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.865763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 
00:37:44.876 [2024-11-10 00:11:10.865881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.865917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.866135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.866232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.866480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.866547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.866671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.866706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.866855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.866905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.867026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.867061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.867221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.867256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.867402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.876 [2024-11-10 00:11:10.867451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.876 qpair failed and we were unable to recover it. 00:37:44.876 [2024-11-10 00:11:10.867582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.867659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.867826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.867865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 
00:37:44.877 [2024-11-10 00:11:10.868082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.868137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.868393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.868431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.868551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.868604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.868762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.868798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.868968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.869013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.869291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.869363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.869527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.869562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.869712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.869758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.869864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.869898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.870109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.870164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 
00:37:44.877 [2024-11-10 00:11:10.870441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.870503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.870723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.870759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.870992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.871064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.871260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.871318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.871480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.871514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.871684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.871719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.871849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.871917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.872204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.872265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.872510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.872546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.872697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.872744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 
00:37:44.877 [2024-11-10 00:11:10.872896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.872952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.873168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.873227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.873446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.873481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.873645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.873691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.873833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.873869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.874026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.874065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.874222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.874262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.874438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.874504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.874669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.874705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.874848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.874911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 
00:37:44.877 [2024-11-10 00:11:10.875099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.875133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.877 [2024-11-10 00:11:10.875241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.877 [2024-11-10 00:11:10.875295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.877 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.875438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.875478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.875613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.875650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.875807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.875843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.876122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.876181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.876323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.876395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.876536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.876574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.876728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.876763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.876903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.876937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 
00:37:44.878 [2024-11-10 00:11:10.877100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.877151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.877308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.877345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.877502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.877548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.877699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.877734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.877841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.877876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.878090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.878157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.878301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.878356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.878507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.878544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.878661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.878697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.878808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.878844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 
00:37:44.878 [2024-11-10 00:11:10.878970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.879023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.879207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.879244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.879436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.879494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.879603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.879651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.879828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.879868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.880051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.880088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.880393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.880464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.880654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.880691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.880803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.880839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.880989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.881023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 
00:37:44.878 [2024-11-10 00:11:10.881123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.881158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.881386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.881447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.881601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.881654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.881805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.881844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.881992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.882046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.878 qpair failed and we were unable to recover it. 00:37:44.878 [2024-11-10 00:11:10.882247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.878 [2024-11-10 00:11:10.882305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.882471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.882505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.882650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.882700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.882875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.882941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.883233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.883316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 
00:37:44.879 [2024-11-10 00:11:10.883500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.883540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.883719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.883758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.883916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.883975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.884122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.884173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.884335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.884387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.884542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.884604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.884757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.884794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.884916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.884959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.885096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.885148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.885354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.885413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 
00:37:44.879 [2024-11-10 00:11:10.885611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.885677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.885902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.885968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.886187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.886233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.886377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.886449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.886579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.886621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.886761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.886814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.886985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.887040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.887204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.887279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.887407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.887445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.887628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.887664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 
00:37:44.879 [2024-11-10 00:11:10.887824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.887873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.888168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.888235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.888401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.888435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.888578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.888627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.888776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.888826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.889077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.889117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.889397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.889458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.889649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.889684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.889820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.879 [2024-11-10 00:11:10.889855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.879 qpair failed and we were unable to recover it. 00:37:44.879 [2024-11-10 00:11:10.890032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.890071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 
00:37:44.880 [2024-11-10 00:11:10.890229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.890268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.890392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.890444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.890551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.890602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.890742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.890779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.890893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.890952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.891156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.891225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.891516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.891557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.891697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.891733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.891870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.891904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.892028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.892087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 
00:37:44.880 [2024-11-10 00:11:10.892286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.892319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.892495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.892530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.892721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.892770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.892922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.892975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.893183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.893223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.893462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.893500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.893652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.893691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.893854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.893891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.894047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.894101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.894286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.894325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 
00:37:44.880 [2024-11-10 00:11:10.894522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.894560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.894693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.894730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.894896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.894957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.895222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.895284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.895449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.895488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.895705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.895741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.895861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.895911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.896177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.896251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.896389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.896444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.896584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.896626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 
00:37:44.880 [2024-11-10 00:11:10.896810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.896865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.897136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.897192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.880 [2024-11-10 00:11:10.897341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.880 [2024-11-10 00:11:10.897416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.880 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.897595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.897631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.897763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.897797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.897978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.898014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.898231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.898298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.898510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.898566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.898740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.898793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.898973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.899009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 
00:37:44.881 [2024-11-10 00:11:10.899162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.899214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.899350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.899386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.899554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.899594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.899772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.899828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.899993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.900034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.900267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.900325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.900490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.900524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.900659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.900701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.900830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.900877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.901052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.901090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 
00:37:44.881 [2024-11-10 00:11:10.901221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.901258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.901373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.901411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.901564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.901622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.901775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.901809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.901956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.902011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.902176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.902228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.902334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.902369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.902506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.902539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.902655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.902691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.902903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.902962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 
00:37:44.881 [2024-11-10 00:11:10.903231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.903295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.903489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.903524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.881 [2024-11-10 00:11:10.903674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.881 [2024-11-10 00:11:10.903715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.881 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.903864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.903898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.904002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.904061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.904224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.904266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.904447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.904501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.904669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.904704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.904839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.904891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.905050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.905087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 
00:37:44.882 [2024-11-10 00:11:10.905242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.905280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.905523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.905565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.905740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.905773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.905917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.905950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.906148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.906185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.906415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.906457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.906617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.906668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.906807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.906840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.907010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.907044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.907244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.907284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 
00:37:44.882 [2024-11-10 00:11:10.907430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.907466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.907609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.907660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.907788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.907821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.907990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.908024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.908216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.908254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.908425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.908463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.908602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.908652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.908769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.908803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.908947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.909006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.909167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.909208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 
00:37:44.882 [2024-11-10 00:11:10.909369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.909420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.909579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.909621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.909799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.909834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.910012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.910050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.910161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.910213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.910377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.882 [2024-11-10 00:11:10.910414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.882 qpair failed and we were unable to recover it. 00:37:44.882 [2024-11-10 00:11:10.910540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.910597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.910729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.910763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.910939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.910976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.911103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.911154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 
00:37:44.883 [2024-11-10 00:11:10.911358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.911394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.911538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.911593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.911725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.911764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.911921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.911956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.912055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.912105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.912294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.912346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.912475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.912509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.912656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.912690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.912793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.912826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.912969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.913002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 
00:37:44.883 [2024-11-10 00:11:10.913159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.913198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.913368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.913406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.913548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.913600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.913730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.913764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.913903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.913937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.914092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.914128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.914248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.914285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.914438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.914476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.914614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.914671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.914795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.914845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 
00:37:44.883 [2024-11-10 00:11:10.915017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.915073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.915235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.915270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.915382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.915418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.915553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.915603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.915709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.915743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.915881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.915915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.916055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.916090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.916247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.916281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.916446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.916481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 00:37:44.883 [2024-11-10 00:11:10.916637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.883 [2024-11-10 00:11:10.916672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.883 qpair failed and we were unable to recover it. 
00:37:44.884 [2024-11-10 00:11:10.916816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.916851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.917033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.917068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.917171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.917204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.917335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.917369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.917474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.917507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.917632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.917684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.917811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.917848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.918002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.918038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.918194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.918231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.918383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.918420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 
00:37:44.884 [2024-11-10 00:11:10.918546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.918596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.918725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.918759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.918899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.918941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.919095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.919135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.919282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.919319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.919467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.919503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.919656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.919691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.919829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.919863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.920011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.920047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.920169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.920207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 
00:37:44.884 [2024-11-10 00:11:10.920363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.920397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.920542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.920593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.920731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.920765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.920913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.920950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.921124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.921161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.921308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.921361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.921533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.921578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.921691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.921725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.921833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.921892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.922049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.922086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 
00:37:44.884 [2024-11-10 00:11:10.922228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.922264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.922434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.922467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.922603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.922637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.884 qpair failed and we were unable to recover it. 00:37:44.884 [2024-11-10 00:11:10.922806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.884 [2024-11-10 00:11:10.922840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.923010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.923062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.923175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.923211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.923351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.923388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.923574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.923614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.923750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.923783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 
00:37:44.885 [2024-11-10 00:11:10.923922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:44.885 [2024-11-10 00:11:10.924156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.924220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.924462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.924530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.924717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.924753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.924919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.924956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.925144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.925219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.925343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.925395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.925555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.925618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.925759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.925793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.925996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.926030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 
00:37:44.885 [2024-11-10 00:11:10.926310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.926366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.926499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.926531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.926705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.926740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.926912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.926949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.927124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.927162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.927310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.927347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.927485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.927521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.927706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.927755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.927888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.927937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.928075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.928120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 
00:37:44.885 [2024-11-10 00:11:10.928273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.928323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.928489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.928529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.928692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.928729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.928844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.928887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.929030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.929067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.929275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.929339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.929511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.929549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.929712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.929757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.885 [2024-11-10 00:11:10.929913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.885 [2024-11-10 00:11:10.929965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.885 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.930164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.930221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 
00:37:44.886 [2024-11-10 00:11:10.930465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.930520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.930658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.930695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.930836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.930901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.931067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.931106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.931245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.931322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.931479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.931513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.931658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.931707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.931860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.931937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.932076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.932130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.932274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.932321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 
00:37:44.886 [2024-11-10 00:11:10.932506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.932544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.932714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.932763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.932901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.932941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.933115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.933154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.933418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.933479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.933638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.933674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.933807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.933841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.934098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.934159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.934459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.934516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.934655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.934690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 
00:37:44.886 [2024-11-10 00:11:10.934829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.934881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.935133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.935190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.935322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.935355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.935521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.935560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.935752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.935801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.935995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.936063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.936271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.936330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.936507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.936545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.936725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.936760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.886 qpair failed and we were unable to recover it. 00:37:44.886 [2024-11-10 00:11:10.936886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.886 [2024-11-10 00:11:10.936954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 
00:37:44.887 [2024-11-10 00:11:10.937186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.937247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.937507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.937567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.937764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.937798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.937987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.938020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.938231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.938294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.938466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.938514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.938707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.938756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.938901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.938943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.939082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.939123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.939399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.939440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 
00:37:44.887 [2024-11-10 00:11:10.939606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.939642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.939794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.939843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.940054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.940116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.940315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.940413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.940555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.940604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.940713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.940747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.940909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.940942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.941035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.941086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.941314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.941372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.941532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.941612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 
00:37:44.887 [2024-11-10 00:11:10.941747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.941781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.941897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.941934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.942139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.942176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.942402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.942474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.942665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.942702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.942851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.942901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.943104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.943167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.943304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.943337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.943533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.943571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 00:37:44.887 [2024-11-10 00:11:10.943760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.887 [2024-11-10 00:11:10.943798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.887 qpair failed and we were unable to recover it. 
00:37:44.887 [2024-11-10 00:11:10.943996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:44.887 [2024-11-10 00:11:10.944050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:44.887 qpair failed and we were unable to recover it.
00:37:44.887-00:37:44.894 The three records above repeat continuously from [2024-11-10 00:11:10.944319] through [2024-11-10 00:11:10.987891]: every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error in turn for tqpairs 0x615000210000, 0x6150001f2f00, 0x6150001ffe80, and 0x61500021ff00, and each qpair fails without recovery.
00:37:44.894 [2024-11-10 00:11:10.987998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.988039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.988218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.988275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.988478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.988534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.988699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.988752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.988924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.988973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.989110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.989162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.989453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.989490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.989658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.989697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.989813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.989866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.990016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.990054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-11-10 00:11:10.990228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.990265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.990426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.990463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.990662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.990696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.990802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.990835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.991052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.991089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.991249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.991300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.991445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.991482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.991603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.991637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.991739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.991772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.991945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.991979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-11-10 00:11:10.992124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.992176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.992350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.992405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.992560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.992632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.992798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.992847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.993000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.993054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.993182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.993234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.993365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.993400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.993511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.993557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.993703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.993760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.993937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.993978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-11-10 00:11:10.994172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.994241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.994504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.994564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.994728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.994762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.994936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.995002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.995246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.995318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.995503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.995544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.995687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.995723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.995873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.995911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.996023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.996060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.996293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.996357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-11-10 00:11:10.996491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.996536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.996680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.996714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.996849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.996899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.997153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.997191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.997335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.997383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.997519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.997553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.997690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.997725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.997852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.997890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.998045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.998094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.998353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.998394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-11-10 00:11:10.998544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.998583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.998756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.998791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.998994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.999089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.999309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.999365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.999481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.999523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.999674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.999710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:10.999861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:10.999922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:11.000078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:11.000131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:11.000319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:11.000388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.894 qpair failed and we were unable to recover it. 00:37:44.894 [2024-11-10 00:11:11.000533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.894 [2024-11-10 00:11:11.000568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-11-10 00:11:11.000766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.000814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.000971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.001007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.001186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.001224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.001452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.001491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.001623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.001677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.001843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.001877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.002014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.002065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.002274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.002334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.002511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.002544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.002680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.002714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-11-10 00:11:11.002823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.002859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.002970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.003003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.003164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.003208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.003348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.003385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.003554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.003620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.003795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.003841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.004025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.004079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.004308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.004370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.004523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.004558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.004692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.004742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-11-10 00:11:11.004931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:44.895 [2024-11-10 00:11:11.004975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:44.895 qpair failed and we were unable to recover it. 00:37:44.895 [2024-11-10 00:11:11.005131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.005168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.005283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.005319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.005468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.005505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.005654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.005691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.005820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.005878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.006020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.006060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.006240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.006295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.006471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.006510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.006677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.006712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 
00:37:45.181 [2024-11-10 00:11:11.006844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.006879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.007001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.007045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.007209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.007248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.007388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.007441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.007600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.007657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.007778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.007812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.007916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.007953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.008099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.008132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.008242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.008278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.008420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.008458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 
00:37:45.181 [2024-11-10 00:11:11.008612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.008667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.008793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.008830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.009003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.009044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.009183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.009229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.009351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.009386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.009543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.009619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.009766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.009805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.009908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.009944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.010170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.010225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.010476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.010531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 
00:37:45.181 [2024-11-10 00:11:11.010654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.010691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.181 [2024-11-10 00:11:11.010852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.181 [2024-11-10 00:11:11.010901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.181 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.011036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.011074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.011200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.011241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.011476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.011536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.011709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.011745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.011908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.011972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.012179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.012235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.012368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.012402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.012524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.012559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 
00:37:45.182 [2024-11-10 00:11:11.012704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.012774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.012919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.012969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.013956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.014007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.014179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.014232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.014372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.014406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.014581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.014625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.014780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.014819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.014970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.015016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.015245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.015304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 00:37:45.182 [2024-11-10 00:11:11.015439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.015473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it. 
00:37:45.182 [2024-11-10 00:11:11.015662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.182 [2024-11-10 00:11:11.015700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.182 qpair failed and we were unable to recover it.
[repeated entries condensed: the same three-message pattern — posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=... with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — recurs continuously from 00:11:11.015823 through 00:11:11.052442 (log wall-clock 00:37:45.182-00:37:45.188) for tqpair handles 0x6150001ffe80, 0x6150001f2f00 and 0x61500021ff00, all targeting addr=10.0.0.2, port=4420]
00:37:45.188 [2024-11-10 00:11:11.052604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-10 00:11:11.052639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-10 00:11:11.052786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-10 00:11:11.052818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.188 [2024-11-10 00:11:11.052930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.188 [2024-11-10 00:11:11.052962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.188 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.053067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.053099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.053235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.053266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.053402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.053434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.053560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.053611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.053762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.053797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.053975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.054022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.054139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.054172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 
00:37:45.189 [2024-11-10 00:11:11.054307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.054340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.054441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.054474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.054606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.054639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.054753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.054785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.054922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.054959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.055101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.055135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.055291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.055324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.055430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.055467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.055611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.055649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.055839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.055875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 
00:37:45.189 [2024-11-10 00:11:11.055994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.056040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.056155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.056188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.056303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.056337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.056454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.056488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.056652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.056685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.056790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.056822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.056953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.056985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.057145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.057177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.057293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.057325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.057462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.057496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 
00:37:45.189 [2024-11-10 00:11:11.057631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.057679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.057825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.057861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.057982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.058016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.058170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.058202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.058365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.058397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.058496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.189 [2024-11-10 00:11:11.058529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.189 qpair failed and we were unable to recover it. 00:37:45.189 [2024-11-10 00:11:11.058655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.058697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.058816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.058850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.058990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.059024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.059192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.059225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 
00:37:45.190 [2024-11-10 00:11:11.059388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.059421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.059598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.059664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.059804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.059837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.059980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.060013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.060143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.060175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.060288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.060321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.060459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.060491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.060650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.060698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.060877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.060915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.061052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.061086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 
00:37:45.190 [2024-11-10 00:11:11.061244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.061277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.061374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.061408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.061581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.061622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.061736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.061771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.061906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.061944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.062111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.062144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.062270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.062303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.062412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.062450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.062627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.062662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.062763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.062796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 
00:37:45.190 [2024-11-10 00:11:11.062935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.062967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.063132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.063166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.063274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.063308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.063438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.063471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.063581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.063620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.063770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.063817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.063927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.063961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.064091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.064125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.064267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.064300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 00:37:45.190 [2024-11-10 00:11:11.064440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.190 [2024-11-10 00:11:11.064473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.190 qpair failed and we were unable to recover it. 
00:37:45.190 [2024-11-10 00:11:11.064621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.064655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.064759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.064792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.064932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.064965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.065135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.065168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.065274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.065308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.065422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.065461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.065605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.065642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.065784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.065831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.065982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.066016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.066153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.066186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 
00:37:45.191 [2024-11-10 00:11:11.066332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.066365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.066502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.066535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.066690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.066723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.066876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.066911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.067046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.067079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.067191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.067226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.067377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.067411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.067545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.067606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.067736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.067772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.067921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.067954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 
00:37:45.191 [2024-11-10 00:11:11.068096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.068129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.068245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.068277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.068399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.068439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.068614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.068648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.068783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.068822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.068923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.068956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.069068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.069101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.069236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.069269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.069368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.069403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.191 qpair failed and we were unable to recover it. 00:37:45.191 [2024-11-10 00:11:11.069563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.191 [2024-11-10 00:11:11.069618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 
00:37:45.192 [2024-11-10 00:11:11.069766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.069803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.069968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.070002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.070137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.070169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.070305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.070339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.070452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.070486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.070640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.070688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.070826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.070861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.070971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.071004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.071151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.071185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.071332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.071365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 
00:37:45.192 [2024-11-10 00:11:11.071497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.071530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.071679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.071713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.071819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.071851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.071948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.071980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.072105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.072137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.072301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.072333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.072463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.072495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.072638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.072673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.072785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.072817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.072955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.072988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 
00:37:45.192 [2024-11-10 00:11:11.073120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.073155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.073332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.073365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.073520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.073556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.073741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.073775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.073912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.073945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.074077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.074109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.074241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.074274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.074390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.074422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.074526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.074557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.074698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.074733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 
00:37:45.192 [2024-11-10 00:11:11.074849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.074882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.075073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.192 [2024-11-10 00:11:11.075110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.192 qpair failed and we were unable to recover it. 00:37:45.192 [2024-11-10 00:11:11.075279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.075329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.075465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.075498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.075614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.075668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.075838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.075872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.075984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.076016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.076148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.076180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.076321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.076354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.076487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.076518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 
00:37:45.193 [2024-11-10 00:11:11.076661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.076697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.076841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.076879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.077053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.077088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.077200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.077236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.077374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.077408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.077543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.077577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.077718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.077753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.077869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.077901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.078023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.078056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.078154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.078187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 
00:37:45.193 [2024-11-10 00:11:11.078323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.078359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.078509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.078544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.078674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.078722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.078823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.078856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.078969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.079001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.079101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.079133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.079282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.079317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.079454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.079489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.079636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.079674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.079796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.079843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 
00:37:45.193 [2024-11-10 00:11:11.080000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.080037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.080144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.080177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.080302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.080333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.080468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.080501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.080647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.080682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.080812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.193 [2024-11-10 00:11:11.080846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.193 qpair failed and we were unable to recover it. 00:37:45.193 [2024-11-10 00:11:11.080961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.080995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.081143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.081177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.081276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.081319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.081421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.081454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 
00:37:45.194 [2024-11-10 00:11:11.081564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.081603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.081707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.081740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.081880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.081915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.082077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.082110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.082246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.082284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.082420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.082453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.082575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.082631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.082749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.082785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.082953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.082988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.083152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.083185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 
00:37:45.194 [2024-11-10 00:11:11.083347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.083381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.083513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.083558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.083707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.083741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.083870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.083917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.084069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.084105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.084207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.084243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.084372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.084409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.084532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.084566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.084714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.084748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.084886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.084921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 
00:37:45.194 [2024-11-10 00:11:11.085063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.085097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.085260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.085294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.085444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.085482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.085628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.085663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.085804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.085851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.086019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.086054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.086231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.086264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.086369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.086401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.086538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.086571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.194 qpair failed and we were unable to recover it. 00:37:45.194 [2024-11-10 00:11:11.086720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.194 [2024-11-10 00:11:11.086753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 
00:37:45.195 [2024-11-10 00:11:11.086864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.086896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.087062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.087096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.087217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.087249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.087408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.087443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.087578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.087622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.087797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.087831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.087964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.087997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.088170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.088217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.088382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.088422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.088640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.088686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 
00:37:45.195 [2024-11-10 00:11:11.088851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.088884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.089040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.089072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.089182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.089214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.089385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.089420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.089557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.089603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.089737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.089785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.089925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.089958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.090095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.090128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.090262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.090297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.090459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.090493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 
00:37:45.195 [2024-11-10 00:11:11.090640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.090675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.090818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.090851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.090989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.091022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.091135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.091170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.091308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.091342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.091501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.091534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.091679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.091711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.091871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.091903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.092011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.092042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.092154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.092187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 
00:37:45.195 [2024-11-10 00:11:11.092327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.092362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.092524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.092572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.092696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.195 [2024-11-10 00:11:11.092732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.195 qpair failed and we were unable to recover it. 00:37:45.195 [2024-11-10 00:11:11.092868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.092902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.093035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.093068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.093210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.093244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.093374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.093406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.093560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.093605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.093762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.093796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.093906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.093940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 
00:37:45.196 [2024-11-10 00:11:11.094082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.094116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.094252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.094286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.094398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.094433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.094591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.094639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.094755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.094791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.094929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.094962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.095111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.095144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.095307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.095340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.095459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.095494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.095650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.095698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 
00:37:45.196 [2024-11-10 00:11:11.095847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.095884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.096028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.096063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.096225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.096258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.096363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.096397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.096558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.096622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.096731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.096766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.096915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.096949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.097079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.097113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.097247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.097283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.097427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.097461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 
00:37:45.196 [2024-11-10 00:11:11.097603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.097637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.097757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.097805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.097979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.196 [2024-11-10 00:11:11.098014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.196 qpair failed and we were unable to recover it. 00:37:45.196 [2024-11-10 00:11:11.098152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.098184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.098323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.098355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.098466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.098498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.098628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.098677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.098855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.098892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.099039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.099074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.099206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.099240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 
00:37:45.197 [2024-11-10 00:11:11.099404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.099438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.099590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.099625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.099727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.099760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.099868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.099905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.100048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.100082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.100208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.100241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.100381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.100414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.100611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.100645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.100779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.100814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.100947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.100980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 
00:37:45.197 [2024-11-10 00:11:11.101092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.101123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.101258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.101292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.101398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.101432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.101599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.101635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.101770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.101805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.101942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.101976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.102137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.102171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.102311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.102345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.102477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.102512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.102677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.102711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 
00:37:45.197 [2024-11-10 00:11:11.102864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.102912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.103084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.103117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.103228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.103262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.103392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.103426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.103531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.103570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.103748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.103782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.103885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.197 [2024-11-10 00:11:11.103919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.197 qpair failed and we were unable to recover it. 00:37:45.197 [2024-11-10 00:11:11.104078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.104117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.104256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.104289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.104471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.104508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 
00:37:45.198 [2024-11-10 00:11:11.104652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.104699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.104848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.104883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.105021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.105053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.105185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.105218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.105332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.105365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.105503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.105535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.105643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.105678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.105812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.105846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.105958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.105993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.106131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.106164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 
00:37:45.198 [2024-11-10 00:11:11.106315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.106348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.106506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.106553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.106671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.106707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.106850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.106884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.107024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.107058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.107173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.107207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.107374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.107408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.107564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.107625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.107737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.107775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.107892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.107926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 
00:37:45.198 [2024-11-10 00:11:11.108051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.108084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.108197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.108230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.108332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.108365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.108532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.108565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.108733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.108780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.108952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.108987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.109150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.109183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.109348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.109382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.109525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.109558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.198 [2024-11-10 00:11:11.109690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.109738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 
00:37:45.198 [2024-11-10 00:11:11.109881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.198 [2024-11-10 00:11:11.109915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.198 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.110051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.110084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.110256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.110289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.110392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.110424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.110562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.110600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.110715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.110747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.110849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.110882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.111018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.111050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.111178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.111209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.111321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.111360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 
00:37:45.199 [2024-11-10 00:11:11.111514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.111550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.111699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.111735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.111848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.111882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.112019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.112053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.112163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.112196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.112302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.112337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.112502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.112535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.112652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.112700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.112880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.112914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.113026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.113061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 
00:37:45.199 [2024-11-10 00:11:11.113191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.113226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.113372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.113409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.113584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.113628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.113787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.113821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.113957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.113989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.114129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.114161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.114297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.114330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.114464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.114496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.114634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.114667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.114830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.114865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 
00:37:45.199 [2024-11-10 00:11:11.114984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.115031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.115184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.115226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.115336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.115370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.115512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.199 [2024-11-10 00:11:11.115545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.199 qpair failed and we were unable to recover it. 00:37:45.199 [2024-11-10 00:11:11.115654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.115688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.115819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.115852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.115981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.116014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.116186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.116219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.116322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.116356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.116524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.116559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 
00:37:45.200 [2024-11-10 00:11:11.116707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.116741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.116878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.116912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.117036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.117070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.117208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.117242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.117351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.117384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.117539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.117595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.117748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.117784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.117903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.117935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.118044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.118077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.118176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.118208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 
00:37:45.200 [2024-11-10 00:11:11.118347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.118382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.118517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.118570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.118747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.118787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.119041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.119099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.119229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.119262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.119426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.119457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.119601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.119634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.119765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.119798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.119978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.120010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.120147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.120179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 
00:37:45.200 [2024-11-10 00:11:11.120331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.120379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.120529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.120565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.120745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.120781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.120943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.120977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.121143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.121176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.121310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.121342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.121501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.121532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.121677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.200 [2024-11-10 00:11:11.121710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.200 qpair failed and we were unable to recover it. 00:37:45.200 [2024-11-10 00:11:11.121865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.121897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.121998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.122029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 
00:37:45.201 [2024-11-10 00:11:11.122129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.122161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.122359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.122411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.122539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.122596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.122726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.122762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.122904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.122938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.123085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.123118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.123264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.123299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.123484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.123522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.123704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.123747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.123873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.123927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 
00:37:45.201 [2024-11-10 00:11:11.124161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.124225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.124408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.124446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.124601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.124646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.124778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.124812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.124938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.124971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.125144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.125178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.125318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.125350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.125459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.125491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.125626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.125660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.125774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.125808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 
00:37:45.201 [2024-11-10 00:11:11.125928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.125976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.126163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.126199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.126363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.126395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.126498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.126530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.126670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.126705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.126856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.126904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.127049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.127084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.127198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.201 [2024-11-10 00:11:11.127231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.201 qpair failed and we were unable to recover it. 00:37:45.201 [2024-11-10 00:11:11.127415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.127448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.127558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.127604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 
00:37:45.202 [2024-11-10 00:11:11.127739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.127771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.127901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.127933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.128091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.128123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.128236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.128268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.128400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.128438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.128576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.128632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.128752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.128789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.128920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.128955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.129081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.129114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.129245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.129279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 
00:37:45.202 [2024-11-10 00:11:11.129415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.129448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.129610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.129650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.129771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.129805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.129937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.129970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.130129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.130162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.130331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.130364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.130492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.130529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.130665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.130700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.130830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.130864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.131023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.131057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 
00:37:45.202 [2024-11-10 00:11:11.131199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.131233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.131332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.131365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.131476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.131511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.131636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.131683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.131827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.131861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.132002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.132034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.132146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.132180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.132319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.132352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.132491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.132528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.132673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.132707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 
00:37:45.202 [2024-11-10 00:11:11.132878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.132912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.202 qpair failed and we were unable to recover it. 00:37:45.202 [2024-11-10 00:11:11.133048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.202 [2024-11-10 00:11:11.133080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.133215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.133249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.133412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.133446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.133571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.133627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.133740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.133775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.133910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.133944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.134046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.134079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.134215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.134248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.134378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.134411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 
00:37:45.203 [2024-11-10 00:11:11.134546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.134579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.134708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.134756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.134898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.134932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.135070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.135103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.135200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.135233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.135397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.135430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.135546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.135582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.135758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.135793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.135929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.135962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.136123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.136156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 
00:37:45.203 [2024-11-10 00:11:11.136258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.136291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.136446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.136489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.136636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.136685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.136840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.136890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.137039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.137076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.137183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.137218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.137387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.137421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.137556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.137596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.137701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.137735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.137886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.137933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 
00:37:45.203 [2024-11-10 00:11:11.138078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.138113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.138228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.138263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.138395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.138428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.138544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.138579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.138722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.203 [2024-11-10 00:11:11.138756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.203 qpair failed and we were unable to recover it. 00:37:45.203 [2024-11-10 00:11:11.138921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.138954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.139090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.139124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.139290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.139325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.139482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.139520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.139680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.139714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 
00:37:45.204 [2024-11-10 00:11:11.139845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.139893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.140024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.140059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.140192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.140226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.140393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.140426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.140580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.140641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.140756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.140789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.140955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.140990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.141104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.141137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.141277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.141311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.141417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.141451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 
00:37:45.204 [2024-11-10 00:11:11.141592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.141626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.141753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.141786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.141940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.141974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.142083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.142117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.142251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.142283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.142415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.142447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.142603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.142652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.142798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.142833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.142998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.143032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.143149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.143184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 
00:37:45.204 [2024-11-10 00:11:11.143323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.143356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.143500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.143540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.143727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.143775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.143947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.143981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.144083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.144116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.144253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.144286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.144429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.144461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.144571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.144612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.204 [2024-11-10 00:11:11.144759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.204 [2024-11-10 00:11:11.144806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.204 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.144968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.145004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 
00:37:45.205 [2024-11-10 00:11:11.145146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.145180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.145309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.145342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.145499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.145536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.145690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.145739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.145884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.145920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.146061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.146093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.146202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.146234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.146370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.146403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.146511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.146544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.146674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.146710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 
00:37:45.205 [2024-11-10 00:11:11.146835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.146876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.147099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.147138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.147314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.147351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.147500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.147537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.147751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.147788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.148031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.148086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.148205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.148255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.148396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.148433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.148608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.148644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.148798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.148845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 
00:37:45.205 [2024-11-10 00:11:11.148992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.149027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.149191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.149224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.149359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.149392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.149503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.149536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.149688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.149722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.149880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.149913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.150028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.150062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.150159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.150192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.150345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.150393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.150575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.150618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 
00:37:45.205 [2024-11-10 00:11:11.150759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.150793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.150915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.150957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.205 [2024-11-10 00:11:11.151126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.205 [2024-11-10 00:11:11.151160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.205 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.151271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.151304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.151437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.151471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.151614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.151649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.151775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.151808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.151906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.151940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.152079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.152112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.152246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.152278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 
00:37:45.206 [2024-11-10 00:11:11.152440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.152473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.152611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.152645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.152761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.152794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.152932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.152965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.153121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.153154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.153311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.153344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.153500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.153532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.153649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.153683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.153781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.153814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.153948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.153980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 
00:37:45.206 [2024-11-10 00:11:11.154095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.154128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.154297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.154329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.154482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.154519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.154685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.154719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.154834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.154867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.154977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.155009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.155142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.155175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.155311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.155344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.155450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.155501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.155681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.155715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 
00:37:45.206 [2024-11-10 00:11:11.155892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.155928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.206 qpair failed and we were unable to recover it. 00:37:45.206 [2024-11-10 00:11:11.156099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.206 [2024-11-10 00:11:11.156135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.156335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.156372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.156619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.156668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.156827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.156874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.157025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.157061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.157199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.157234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.157380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.157413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.157577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.157616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.157734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.157767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 
00:37:45.207 [2024-11-10 00:11:11.157922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.157954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.158060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.158098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.158232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.158265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.158421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.158455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.158560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.158599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.158717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.158752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.158917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.158950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.159102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.159136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.159272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.159305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.159465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.159498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 
00:37:45.207 [2024-11-10 00:11:11.159695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.159729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.159838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.159870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.159974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.160007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.160173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.160206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.160354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.160399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.160542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.160575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.160727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.160761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.160922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.160955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.161087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.161120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.161256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.161289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 
00:37:45.207 [2024-11-10 00:11:11.161425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.161459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.161577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.161671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.161819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.161854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.161964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.161999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.162166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.207 [2024-11-10 00:11:11.162201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.207 qpair failed and we were unable to recover it. 00:37:45.207 [2024-11-10 00:11:11.162367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.162402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.162540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.162575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.162709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.162757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.162881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.162916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.163054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.163087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 
00:37:45.208 [2024-11-10 00:11:11.163192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.163225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.163362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.163395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.163496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.163529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.163720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.163755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.163916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.163949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.164080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.164113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.164272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.164305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.164454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.164491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.164681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.164731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.164864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.164912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 
00:37:45.208 [2024-11-10 00:11:11.165062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.165097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.165204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.165242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.165377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.165410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.165539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.165572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.165693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.165725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.165853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.165885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.166027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.166061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.166190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.166223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.166328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.166360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.166467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.166504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 
00:37:45.208 [2024-11-10 00:11:11.166680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.166728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.166860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.166908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.167087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.167123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.167261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.167296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.167447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.167485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.167653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.167689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.167824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.167858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.167992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.168024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.168128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.208 [2024-11-10 00:11:11.168161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.208 qpair failed and we were unable to recover it. 00:37:45.208 [2024-11-10 00:11:11.168306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.168340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 
00:37:45.209 [2024-11-10 00:11:11.168454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.168492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.168669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.168705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.168846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.168879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.169014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.169047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.169187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.169220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.169353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.169386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.169532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.169581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.169766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.169803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.169909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.169944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.170071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.170105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 
00:37:45.209 [2024-11-10 00:11:11.170240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.170273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.170420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.170453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.170604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.170652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.170828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.170864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.170972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.171005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.171119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.171152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.171260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.171293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.171426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.171459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.171597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.171632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.171799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.171832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 
00:37:45.209 [2024-11-10 00:11:11.171967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.172000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.172136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.172173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.172310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.172343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.172511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.172545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.172701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.172749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.172872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.172907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.173060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.173108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.173256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.173289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.173397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.173430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.173570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.173615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 
00:37:45.209 [2024-11-10 00:11:11.173713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.173747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.173895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.173927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.209 qpair failed and we were unable to recover it. 00:37:45.209 [2024-11-10 00:11:11.174087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.209 [2024-11-10 00:11:11.174121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.174262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.174296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.174461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.174493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.174603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.174638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.174816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.174864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.175015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.175051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.175212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.175246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.175346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.175380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 
00:37:45.210 [2024-11-10 00:11:11.175520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.175554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.175705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.175740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.175872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.175918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.176074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.176109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.176245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.176279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.176441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.176474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.176600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.176634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.176747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.176782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.176948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.176982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.177130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.177163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 
00:37:45.210 [2024-11-10 00:11:11.177291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.177324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.177477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.177514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.177676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.177710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.177823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.177857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.177991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.178026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.178169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.178203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.178371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.178405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.178536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.178570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.178688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.178722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.178895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.178930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 
00:37:45.210 [2024-11-10 00:11:11.179035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.179086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.179288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.179330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.179486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.179522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.179670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.179740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.179922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.179961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.180236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.210 [2024-11-10 00:11:11.180292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.210 qpair failed and we were unable to recover it. 00:37:45.210 [2024-11-10 00:11:11.180458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.180495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.180645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.180678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.180827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.180863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.180976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.181010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 
00:37:45.211 [2024-11-10 00:11:11.181145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.181179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.181324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.181357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.181494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.181528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.181634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.181667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.181801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.181833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.181997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.182030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.182204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.182237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.182400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.182432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.182602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.182650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.182825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.182862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 
00:37:45.211 [2024-11-10 00:11:11.183051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.183099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.183246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.183283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.183450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.183484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.183613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.183647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.183752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.183785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.183898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.183933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.184035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.184069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.184179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.184214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.184321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.184355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.184466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.184504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 
00:37:45.211 [2024-11-10 00:11:11.184637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.184685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.184841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.184877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.185002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.185035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.185131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.185164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.185330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.185364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.185468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.185501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.211 qpair failed and we were unable to recover it. 00:37:45.211 [2024-11-10 00:11:11.185643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.211 [2024-11-10 00:11:11.185679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.185857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.185892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.186021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.186055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.186188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.186222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 
00:37:45.212 [2024-11-10 00:11:11.186357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.186392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.186530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.186570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.186715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.186749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.186859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.186893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.187025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.187058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.187189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.187222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.187356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.187388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.187529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.187564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.187722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.187769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.187894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.187930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 
00:37:45.212 [2024-11-10 00:11:11.188038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.188071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.188211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.188245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.188364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.188398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.188503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.188537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.188697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.188745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.188903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.188941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.189042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.189076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.189217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.189252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.189405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.189443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.189656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.189704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 
00:37:45.212 [2024-11-10 00:11:11.189853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.189891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.190036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.190070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.190179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.190211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.190344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.190377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.190489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.190523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.190677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.190712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.190874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.190922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.191041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.191076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.191237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.191272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 00:37:45.212 [2024-11-10 00:11:11.191407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.212 [2024-11-10 00:11:11.191454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.212 qpair failed and we were unable to recover it. 
00:37:45.212 [2024-11-10 00:11:11.191618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.191652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.191750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.191783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.191931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.191967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.192133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.192167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.192295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.192328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.192438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.192472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.192613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.192657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.192805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.192853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.193004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.193038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.193207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.193241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 
00:37:45.213 [2024-11-10 00:11:11.193393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.193425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.193584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.193647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.193799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.193836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.193977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.194012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.194148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.194182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.194295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.194330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.194473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.194509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.194647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.194695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.194850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.194886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 00:37:45.213 [2024-11-10 00:11:11.194991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.213 [2024-11-10 00:11:11.195025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.213 qpair failed and we were unable to recover it. 
00:37:45.213 [2024-11-10 00:11:11.195156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.213 [2024-11-10 00:11:11.195189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.213 qpair failed and we were unable to recover it.
[... the same pair of errors (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it.") repeats continuously for tqpairs 0x6150001f2f00, 0x6150001ffe80 and 0x61500021ff00 between 2024-11-10 00:11:11.195156 and 00:11:11.232205 ...]
00:37:45.219 [2024-11-10 00:11:11.232172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.219 [2024-11-10 00:11:11.232205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.219 qpair failed and we were unable to recover it.
00:37:45.219 [2024-11-10 00:11:11.232299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.219 [2024-11-10 00:11:11.232331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.219 qpair failed and we were unable to recover it. 00:37:45.219 [2024-11-10 00:11:11.232437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.219 [2024-11-10 00:11:11.232470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.219 qpair failed and we were unable to recover it. 00:37:45.219 [2024-11-10 00:11:11.232622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.232674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.232814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.232849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.233017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.233050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.233184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.233216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.233376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.233409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.233542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.233575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.233699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.233738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.233877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.233914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 
00:37:45.220 [2024-11-10 00:11:11.234083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.234118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.234244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.234277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.234420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.234455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.234596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.234638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.234753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.234787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.234949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.234982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.235145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.235178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.235314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.235347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.235483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.235517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.235689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.235724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 
00:37:45.220 [2024-11-10 00:11:11.235858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.235891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.236052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.236086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.236233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.236267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.236426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.236463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.236582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.236623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.236782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.236816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.236951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.236984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.237119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.237153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.237292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.237325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.237472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.237507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 
00:37:45.220 [2024-11-10 00:11:11.237672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.237721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.237836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.237871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.238031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.238064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.238231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.238265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.238396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.238429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.220 [2024-11-10 00:11:11.238550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.220 [2024-11-10 00:11:11.238585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.220 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.238768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.238801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.238938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.238978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.239113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.239146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.239278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.239311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 
00:37:45.221 [2024-11-10 00:11:11.239486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.239520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.239640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.239675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.239838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.239871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.240078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.240135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.240305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.240342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.240510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.240547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.240699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.240754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.240980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.241033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.241212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.241323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.241507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.241546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 
00:37:45.221 [2024-11-10 00:11:11.241722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.241761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.241900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.241937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.242074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.242111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.242233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.242285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.242451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.242488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.242672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.242712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.242897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.242941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.243047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.243081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.243266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.243312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.243438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.243487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 
00:37:45.221 [2024-11-10 00:11:11.243632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.243681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.243852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.243891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.244128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.244189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.244302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.244339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.244515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.244554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.244729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.244767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.244919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.244957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.245166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.245221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.245371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.245406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.221 qpair failed and we were unable to recover it. 00:37:45.221 [2024-11-10 00:11:11.245532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.221 [2024-11-10 00:11:11.245581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 
00:37:45.222 [2024-11-10 00:11:11.245763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.245805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.245971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.246034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.246196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.246248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.246400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.246434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.246572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.246617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.246754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.246788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.246902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.246953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.247103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.247141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.247325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.247363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.247503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.247541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 
00:37:45.222 [2024-11-10 00:11:11.247701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.247740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.247948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.247988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.248132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.248171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.248320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.248354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.248547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.248604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.248769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.248825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.248966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.249033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.249228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.249270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.249424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.249464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.249671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.249728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 
00:37:45.222 [2024-11-10 00:11:11.249872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.249923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.250206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.250266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.250398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.250433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.250593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.250631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.250800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.250839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.250974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.251012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.251164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.251202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.251333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.251367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.222 [2024-11-10 00:11:11.251500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.222 [2024-11-10 00:11:11.251533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.222 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.251702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.251738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 
00:37:45.223 [2024-11-10 00:11:11.251903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.251950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.252124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.252165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.252341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.252390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.252607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.252668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.252826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.252889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.253081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.253139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.253344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.253385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.253524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.253559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.253736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.253774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.254032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.254093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 
00:37:45.223 [2024-11-10 00:11:11.254248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.254300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.254444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.254478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.254618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.254679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.254828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.254866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.255078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.255113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.255319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.255373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.255517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.255552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.255731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.255774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.255951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.255989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.256199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.256265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 
00:37:45.223 [2024-11-10 00:11:11.256414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.256452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.256601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.256656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.256826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.256879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.257146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.257188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.257384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.257420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.257551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.257615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.257756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.257809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.257938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.257992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.258148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.258202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.258348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.258384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 
00:37:45.223 [2024-11-10 00:11:11.258491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.258525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.258642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.258678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.258818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.258851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.258999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.223 [2024-11-10 00:11:11.259036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.223 qpair failed and we were unable to recover it. 00:37:45.223 [2024-11-10 00:11:11.259187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.259223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.259359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.259393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.259509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.259543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.259719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.259753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.259881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.259929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.260072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.260108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 
00:37:45.224 [2024-11-10 00:11:11.260253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.260291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.260426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.260461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.260647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.260691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.260863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.260913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.261083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.261124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.261270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.261305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.261448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.261481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.261596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.261631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.261761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.261795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.261947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.261995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 
00:37:45.224 [2024-11-10 00:11:11.262144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.262181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.262316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.262351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.262502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.262537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.262700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.262740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.262887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.262927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.263086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.263123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.263266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.263301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.263462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.263510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.263635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.263672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.263827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.263876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 
00:37:45.224 [2024-11-10 00:11:11.264036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.264072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.264227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.264268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.264423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.264462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.264615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.264677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.264836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.264876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.265040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.265073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.265241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.265275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.265418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.265452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.265597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.265658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.224 [2024-11-10 00:11:11.265828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.265871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 
00:37:45.224 [2024-11-10 00:11:11.266027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.224 [2024-11-10 00:11:11.266068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.224 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.266237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.266272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.266381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.266416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.266560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.266600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.266736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.266788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.266915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.266949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.267109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.267143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.267280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.267317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.267503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.267538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.267730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.267771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 
00:37:45.225 [2024-11-10 00:11:11.267956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.268009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.268119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.268153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.268327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.268362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.268523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.268556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.268733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.268786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.268943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.268985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.269206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.269262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.269422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.269457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.269569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.269612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.269776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.269810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 
00:37:45.225 [2024-11-10 00:11:11.269913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.269946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.270079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.270112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.270253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.270288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.270473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.270510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.270661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.270715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.270885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.270926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.271150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.271208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.271335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.271371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.271479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.271513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.271648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.271682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 
00:37:45.225 [2024-11-10 00:11:11.271787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.271821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.271999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.272047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.272189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.272228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.272402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.272439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.272606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.272641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.272755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.272790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.272949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.225 [2024-11-10 00:11:11.272987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.225 qpair failed and we were unable to recover it. 00:37:45.225 [2024-11-10 00:11:11.273149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.273186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.273298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.273336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.273480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.273518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 
00:37:45.226 [2024-11-10 00:11:11.273699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.273740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.273960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.274014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.274189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.274225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.274364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.274398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.274546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.274583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.274726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.274766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.274935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.274988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.275127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.275164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.275306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.275341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.275482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.275515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 
00:37:45.226 [2024-11-10 00:11:11.275750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.275801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.275966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.276011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.276158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.276192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.276294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.276328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.276437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.276472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.276616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.276659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.276799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.276836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.276978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.277012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.277127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.277164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.277309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.277344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 
00:37:45.226 [2024-11-10 00:11:11.277484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.277533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.277696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.277740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.277867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.277910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.278089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.278143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.278287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.278321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.281722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.281773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.281929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.281977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.282205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.282261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.282403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.282456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 00:37:45.226 [2024-11-10 00:11:11.282605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.226 [2024-11-10 00:11:11.282670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.226 qpair failed and we were unable to recover it. 
00:37:45.226 [2024-11-10 00:11:11.282813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.282865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.283120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.283196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.283344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.283380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.283540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.283575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.283742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.283795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.283932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.283985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.284097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.284133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.284267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.284310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.284461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.284501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.284643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.284678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 
00:37:45.227 [2024-11-10 00:11:11.284863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.284919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.285119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.285173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.285305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.285340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.285544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.285580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.285747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.285800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.285966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.286000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.286154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.286206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.286426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.286459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.286606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.286641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.286789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.286825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 
00:37:45.227 [2024-11-10 00:11:11.286998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.287041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.287227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.287262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.287418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.287454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.287608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.287663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.287806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.287859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.288001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.288042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.288244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.288280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.288413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.288448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.288563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.288605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.288802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.288856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 
00:37:45.227 [2024-11-10 00:11:11.289010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.289064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.289232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.289272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.289422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.289462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.289609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.289645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.289829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.289869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.290092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.290132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.227 [2024-11-10 00:11:11.290324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.227 [2024-11-10 00:11:11.290360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.227 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.290481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.290515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.290682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.290735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.290882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.290934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 
00:37:45.228 [2024-11-10 00:11:11.291065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.291126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.291255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.291289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.291450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.291483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.291620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.291654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.291767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.291800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.291905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.291941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.292105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.292139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.292252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.292289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.292390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.292430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.292553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.292612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 
00:37:45.228 [2024-11-10 00:11:11.292787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.292830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.292995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.293051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.293191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.293225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.293360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.293394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.293489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.293523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.293703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.293741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.293904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.293938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.294043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.294078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.294265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.294303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.294413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.294446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 
00:37:45.228 [2024-11-10 00:11:11.294603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.294674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.294807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.294847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.295032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.295075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.295221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.295256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.295382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.295416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.295532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.295566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.295769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.295821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.295954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.295996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.296150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.296188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.296377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.296410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 
00:37:45.228 [2024-11-10 00:11:11.296548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.296581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.296740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.296773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.296932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.296970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.297178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.297243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.297444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.297485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.297677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.228 [2024-11-10 00:11:11.297725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.228 qpair failed and we were unable to recover it. 00:37:45.228 [2024-11-10 00:11:11.297941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.297996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.298103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.298137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.298278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.298312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.298479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.298513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 
00:37:45.229 [2024-11-10 00:11:11.298653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.298703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.298852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.298900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.299084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.299121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.299344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.299380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.299541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.299581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.299757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.299807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.299967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.300017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.300179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.300248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.300434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.300473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.300617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.300659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 
00:37:45.229 [2024-11-10 00:11:11.300829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.300873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.301038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.301080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.301251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.301305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.301470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.301506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.301671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.301708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.301877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.301918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.302130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.302170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.302334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.302370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.302476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.302511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.302669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.302708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 
00:37:45.229 [2024-11-10 00:11:11.302879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.302930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.303186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.303228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.303390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.303424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.303579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.303640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.303790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.303827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.304091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.304148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.304333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.304375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.304569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.304620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.304824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.304877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.305095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.305153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 
00:37:45.229 [2024-11-10 00:11:11.305319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.305356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.305520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.305553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.305721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.305773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.305894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.305947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.306078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.306111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.229 [2024-11-10 00:11:11.306242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.229 [2024-11-10 00:11:11.306295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.229 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.306438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.306474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.306634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.306671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.306783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.306819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.306995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.307049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 
00:37:45.230 [2024-11-10 00:11:11.307198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.307268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.307379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.307412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.307528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.307566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.307746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.307787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.307938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.307979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.308172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.308211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.308403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.308438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.308576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.308616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.308770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.308806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.308971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.309007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 
00:37:45.230 [2024-11-10 00:11:11.309184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.309282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.309471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.309505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.309670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.309704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.309863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.309903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.310039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.310090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.310211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.310245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.310406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.310442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.310593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.310655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.310842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.310879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.311043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.311079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 
00:37:45.230 [2024-11-10 00:11:11.311285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.311330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.311485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.311522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.311683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.311738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.311901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.311942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.312159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.312241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.312381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.312420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.312562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.312605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.312776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.312813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.313073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.313130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.313328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.313395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 
00:37:45.230 [2024-11-10 00:11:11.313550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.313583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.313734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.313767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.313907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.313940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.314077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.314110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.230 [2024-11-10 00:11:11.314234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.230 [2024-11-10 00:11:11.314270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.230 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.314450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.314493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.314621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.314655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.314780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.314818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.314968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.315004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.315174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.315211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 
00:37:45.231 [2024-11-10 00:11:11.315385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.315422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.315537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.315574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.315732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.315775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.315958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.316011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.316156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.316201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.316386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.316436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.316604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.316643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.316762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.316798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.316932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.316967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.317112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.317147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 
00:37:45.231 [2024-11-10 00:11:11.317312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.317346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.317489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.317524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.317658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.317698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.317820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.317860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.318004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.318041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.318220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.318256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.318373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.318409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.318521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.318572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.318713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.318750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.318895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.318932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 
00:37:45.231 [2024-11-10 00:11:11.319095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.319133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.319264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.319300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.319426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.319476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.319654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.319708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.319872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.319911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.320101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.320161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.320304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.320338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.320476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.320521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.231 [2024-11-10 00:11:11.320681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.231 [2024-11-10 00:11:11.320719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.231 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.320847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.320884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 
00:37:45.232 [2024-11-10 00:11:11.321089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.321158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.321331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.321369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.321520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.321557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.321704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.321742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.321870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.321923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.322124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.322174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.322354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.322393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.322547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.322581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.322752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.322789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.322953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.322990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 
00:37:45.232 [2024-11-10 00:11:11.323110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.323146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.323331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.323365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.323504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.323537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.323670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.323718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.323866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.323903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.324068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.324102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.324250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.324285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.324427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.324460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.324600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.324635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.324769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.324807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 
00:37:45.232 [2024-11-10 00:11:11.324948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.324985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.325163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.325200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.325321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.325355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.325485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.325518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.325684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.325728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.325906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.325946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.326123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.326180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.326339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.326372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.326506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.326540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 00:37:45.232 [2024-11-10 00:11:11.326760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.232 [2024-11-10 00:11:11.326816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.232 qpair failed and we were unable to recover it. 
00:37:45.233 [2024-11-10 00:11:11.326973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.327013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.327149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.327186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.327361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.327398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.327547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.327585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.327773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.327817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.327977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.328031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.328169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.328222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.328355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.328389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.328524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.328558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.328734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.328788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 
00:37:45.233 [2024-11-10 00:11:11.328908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.328946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.329062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.329098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.329263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.329301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.329477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.329513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.329694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.329732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.329938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.329997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.330156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.330256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.330387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.330420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.330559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.330600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.330738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.330800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 
00:37:45.233 [2024-11-10 00:11:11.331023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.331086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.331221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.331259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.331369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.331406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.331533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.331566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.331676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.331710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.331820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.331868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.331994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.332028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.332211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.332255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.332414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.332449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.332647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.332687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 
00:37:45.233 [2024-11-10 00:11:11.332834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.332895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.333089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.333140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.333314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.333354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.333472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.333506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.233 [2024-11-10 00:11:11.333669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.233 [2024-11-10 00:11:11.333722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.233 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.333884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.333943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.334098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.334135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.334286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.334321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.334461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.334502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.334668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.334710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 
00:37:45.234 [2024-11-10 00:11:11.334919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.334974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.335122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.335157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.335292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.335329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.335438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.335473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.335639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.335675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.335815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.335849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.335996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.336029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.336128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.336161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.336286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.336330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.336451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.336499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 
00:37:45.234 [2024-11-10 00:11:11.336639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.336685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.336845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.336898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.337061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.337113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.337262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.337296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.337440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.337477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.337617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.337669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.337791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.337840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.337987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.338023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.338215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.338258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.338366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.338400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 
00:37:45.234 [2024-11-10 00:11:11.338500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.338532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.338660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.338694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.338818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.338853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.338999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.339034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.339151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.339187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.339349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.339383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.339527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.339565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.339706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.339760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.339946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.340012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.340142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.340181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 
00:37:45.234 [2024-11-10 00:11:11.340329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.340364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.340506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.340539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.340734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.340782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.340927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.340973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.341215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.341255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.341448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.234 [2024-11-10 00:11:11.341486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.234 qpair failed and we were unable to recover it. 00:37:45.234 [2024-11-10 00:11:11.341645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.341683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.341863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.341902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.342133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.342170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.342302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.342337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 
00:37:45.235 [2024-11-10 00:11:11.342436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.342469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.342605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.342640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.342827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.342876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.343021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.343056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.343174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.343210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.343389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.343425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.343579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.343657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.343813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.343852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.343985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.344031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.344168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.344204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 
00:37:45.235 [2024-11-10 00:11:11.344346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.344380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.344530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.344564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.344732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.344786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.345007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.345045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.345193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.345248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.345390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.345429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.345564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.345604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.345744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.345779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.345909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.345944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.346067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.346101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 
00:37:45.235 [2024-11-10 00:11:11.346239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.346271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.346432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.346464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.346581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.346620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.346722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.346757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.346926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.346979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.347175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.347228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.347394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.347439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.347591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.347625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.347805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.347859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.348037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.348075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 
00:37:45.235 [2024-11-10 00:11:11.348240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.348292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.348428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.348461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.348569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.348609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.348782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.348820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.348968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.349005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.349186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.349222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.235 [2024-11-10 00:11:11.349365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.235 [2024-11-10 00:11:11.349401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.235 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.349575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.349630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.349806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.349843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.350063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.350101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 
00:37:45.236 [2024-11-10 00:11:11.350211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.350247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.350402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.350438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.350606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.350641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.350778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.350816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.350968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.351044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.351259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.351324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.351472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.351508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.351681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.351734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.351916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.351960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.352212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.352264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 
00:37:45.236 [2024-11-10 00:11:11.352432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.352467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.352596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.352661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.352814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.352867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.353034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.353067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.353201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.353234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.353377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.353421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.353575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.353655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.353846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.353908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.236 [2024-11-10 00:11:11.354038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.236 [2024-11-10 00:11:11.354075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.236 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.354248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.354315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 
00:37:45.521 [2024-11-10 00:11:11.354483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.354521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.354776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.354816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.355000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.355056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.355203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.355238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.355382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.355431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.355582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.355625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.355788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.355827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.356097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.356162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.356328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.356363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.356526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.356571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 
00:37:45.521 [2024-11-10 00:11:11.356771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.356813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.357006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.357045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.357176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.357217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.357366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.357403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.357539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.357577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.357741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.357796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.521 [2024-11-10 00:11:11.358000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.521 [2024-11-10 00:11:11.358041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.521 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.358171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.358208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.358371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.358422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.358579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.358622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 
00:37:45.522 [2024-11-10 00:11:11.358756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.358789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.358900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.358954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.359207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.359245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.359395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.359433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.359608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.359663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.359789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.359828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.359964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.360008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.360226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.360294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.360438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.360486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.360668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.360706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 
00:37:45.522 [2024-11-10 00:11:11.360909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.360962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.361246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.361310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.361517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.361553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.361706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.361744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.361902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.361947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.362164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.362207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.362351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.362384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.362518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.362551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.362743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.362780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.362919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.362956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 
00:37:45.522 [2024-11-10 00:11:11.363163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.363219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.363407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.363439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.363541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.363574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.363747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.363784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.364014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.364068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.364249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.364288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.364456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.364491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.364637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.364676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.364829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.364868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.365058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.365106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 
00:37:45.522 [2024-11-10 00:11:11.365272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.365307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.522 qpair failed and we were unable to recover it. 00:37:45.522 [2024-11-10 00:11:11.365441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.522 [2024-11-10 00:11:11.365475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.365651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.365685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.365783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.365817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.365964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.366002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.366175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.366209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.366318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.366385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.366498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.366531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.366650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.366684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.366818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.366851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 
00:37:45.523 [2024-11-10 00:11:11.367065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.367101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.367229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.367267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.367409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.367447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.367608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.367641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.367779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.367812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.367960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.367997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.368167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.368203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.368311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.368347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.368460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.368497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.368676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.368710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 
00:37:45.523 [2024-11-10 00:11:11.368864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.368901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.369035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.369073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.369284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.369321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.369445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.369481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.369583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.369656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.369816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.369855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.369985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.370023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.370167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.370203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.370380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.370416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.370533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.370569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 
00:37:45.523 [2024-11-10 00:11:11.370732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.370781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.370919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.370967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.371136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.371189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.371312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.371365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.371505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.371540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.523 [2024-11-10 00:11:11.371647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.523 [2024-11-10 00:11:11.371682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.523 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.371839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.371877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.372027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.372065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.372237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.372274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.372426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.372463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 
00:37:45.524 [2024-11-10 00:11:11.372613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.372648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.372785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.372818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.372942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.372980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.373132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.373169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.373342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.373378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.373573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.373631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.373770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.373806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.373942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.373995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.374154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.374193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.374335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.374389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 
00:37:45.524 [2024-11-10 00:11:11.374537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.374576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.374739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.374773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.374922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.374961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.375079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.375116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.375244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.375296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.375405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.375441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.375576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.375637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.375752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.375807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.375999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.376038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.376190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.376228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 
00:37:45.524 [2024-11-10 00:11:11.376380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.376417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.376567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.376618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.376754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.376788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.376952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.376990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.377120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.377173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.377322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.377364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.377525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.377559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.377720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.377769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.377914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.524 [2024-11-10 00:11:11.377950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.524 qpair failed and we were unable to recover it. 00:37:45.524 [2024-11-10 00:11:11.378145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.378185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 
00:37:45.525 [2024-11-10 00:11:11.378306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.378345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.378458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.378499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.378670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.378706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.378843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.378878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.379041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.379074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.379219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.379255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.379396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.379431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.379546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.379579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.379725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.379759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.379896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.379935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 
00:37:45.525 [2024-11-10 00:11:11.380125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.380167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.380320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.380359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.380492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.380528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.380682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.380735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.380887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.380924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.381071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.381124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.381262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.381295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.381415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.381450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.381599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.381651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.381765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.381801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 
00:37:45.525 [2024-11-10 00:11:11.381974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.382011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.382151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.382221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.382360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.382397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.382519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.382553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.382741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.382789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.382988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.383052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.383251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.383308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.383454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.383488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.383627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.383662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.383797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.383831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 
00:37:45.525 [2024-11-10 00:11:11.384048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.384084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.384266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.384303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.384405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.384442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.525 qpair failed and we were unable to recover it. 00:37:45.525 [2024-11-10 00:11:11.384583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.525 [2024-11-10 00:11:11.384639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.384771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.384808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.384936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.384995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.385144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.385195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.385341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.385392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.385545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.385601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.385763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.385812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 
00:37:45.526 [2024-11-10 00:11:11.385930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.385966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.386109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.386145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.386251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.386285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.386443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.386476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.386618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.386653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.386814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.386847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.386954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.386988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.387143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.387191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.387318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.387371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.387509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.387542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 
00:37:45.526 [2024-11-10 00:11:11.387676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.387712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.387852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.387896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.388011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.388044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.388162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.388258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.388435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.388472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.388659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.388693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.388854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.388908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.389059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.389110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.389219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.389253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.389387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.389420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 
00:37:45.526 [2024-11-10 00:11:11.389549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.389582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.389714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.389766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.389884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.389923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.526 [2024-11-10 00:11:11.390040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.526 [2024-11-10 00:11:11.390078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.526 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.390221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.390258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.390393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.390425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.390556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.390596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.390724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.390758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.390885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.390935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.391100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.391136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 
00:37:45.527 [2024-11-10 00:11:11.391310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.391347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.391529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.391562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.391709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.391742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.391850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.391906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.392081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.392117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.392277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.392334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.392481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.392517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.392667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.392702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.392859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.392907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.393078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.393132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 
00:37:45.527 [2024-11-10 00:11:11.393393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.393447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.393601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.393652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.393784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.393816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.393993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.394053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.394195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.394246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.394393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.394430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.394584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.394625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.394739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.394771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.394928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.394965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.395095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.395147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 
00:37:45.527 [2024-11-10 00:11:11.395319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.395358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.395507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.395543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.395701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.395734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.395834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.395883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.396003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.396039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.396165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.396215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.396368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.396406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.396550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.396597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.527 [2024-11-10 00:11:11.396780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.527 [2024-11-10 00:11:11.396812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.527 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.396964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.397000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 
00:37:45.528 [2024-11-10 00:11:11.397148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.397185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.397333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.397370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.397528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.397566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.397749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.397783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.397892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.397924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.398069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.398120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.398269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.398305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.398438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.398470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.398612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.398646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.398774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.398807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 
00:37:45.528 [2024-11-10 00:11:11.398933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.398971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.399089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.399126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.399237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.399274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.399387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.399437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.399614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.399664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.399789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.399827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.399982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.400033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.400176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.400211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.400328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.400365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.400568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.400636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 
00:37:45.528 [2024-11-10 00:11:11.400758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.400793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.400932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.400966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.401132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.401169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.401331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.401367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.401515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.401552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.401717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.401752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.401882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.401915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.402065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.402101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.402264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.402296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.402517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.402553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 
00:37:45.528 [2024-11-10 00:11:11.402686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.402720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.402859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.402913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.403112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.403149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.528 [2024-11-10 00:11:11.403305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.528 [2024-11-10 00:11:11.403342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.528 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.403466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.403509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.403632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.403666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.403833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.403883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.404029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.404065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.404188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.404239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.404380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.404417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 
00:37:45.529 [2024-11-10 00:11:11.404573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.404656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.404821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.404857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.404979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.405038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.405190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.405226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.405400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.405435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.405602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.405636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.405744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.405778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.405885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.405918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.406055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.406088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.406213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.406249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 
00:37:45.529 [2024-11-10 00:11:11.406366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.406402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.406549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.406595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.406750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.406783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.406928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.406975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.407146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.407199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.407328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.407366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.407528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.407561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.407698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.407731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.407831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.407882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.408031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.408074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 
00:37:45.529 [2024-11-10 00:11:11.408245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.408280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.408421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.408456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.408609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.408649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.408776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.408808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.408963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.409017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.409188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.409222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.409363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.409396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.409534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.409567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.529 [2024-11-10 00:11:11.409718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.529 [2024-11-10 00:11:11.409753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.529 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.409900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.409933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 
00:37:45.530 [2024-11-10 00:11:11.410038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.410071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.410247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.410280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.410391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.410435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.410559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.410623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.410775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.410810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.410962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.410996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.411104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.411148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.411246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.411279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.411377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.411411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.411531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.411579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 
00:37:45.530 [2024-11-10 00:11:11.411766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.411803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.411967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.412003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.412186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.412228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.412401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.412483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.412649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.412682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.412820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.412852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.413039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.413076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.413249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.413285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.413458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.413493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.413630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.413696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 
00:37:45.530 [2024-11-10 00:11:11.413840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.413893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.414071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.414110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.414221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.414258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.414406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.414443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.414597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.414631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.414775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.414808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.414945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.414982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.415233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.415271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.415393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.415428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.415576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.415635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 
00:37:45.530 [2024-11-10 00:11:11.415747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.415780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.415910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.415942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.416060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.416095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.416227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.416278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.416402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.416442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.416603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.530 [2024-11-10 00:11:11.416660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.530 qpair failed and we were unable to recover it. 00:37:45.530 [2024-11-10 00:11:11.416778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.416810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.416960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.416998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.417143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.417181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.417384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.417450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 
00:37:45.531 [2024-11-10 00:11:11.417620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.417654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.417789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.417821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.417979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.418015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.418163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.418198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.418340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.418376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.418490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.418525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.418663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.418695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.418887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.418923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.419069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.419105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.419223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.419272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 
00:37:45.531 [2024-11-10 00:11:11.419420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.419455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.419622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.419688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.419828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.419875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.420030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.420069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.420233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.420271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.420447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.420495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.420642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.420679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.420797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.420835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.421023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.421057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.421214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.421265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 
00:37:45.531 [2024-11-10 00:11:11.421406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.421440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.421577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.421615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.421739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.421772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.421988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.422047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.422250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.422305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.422460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.422492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.531 [2024-11-10 00:11:11.422637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.531 [2024-11-10 00:11:11.422671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.531 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.422774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.422824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.422948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.422985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.423128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.423163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 
00:37:45.532 [2024-11-10 00:11:11.423299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.423334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.423477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.423513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.423709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.423745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.423922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.423975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.424135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.424174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.424336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.424374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.424519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.424553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.424711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.424745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.424939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.424977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.425104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.425154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 
00:37:45.532 [2024-11-10 00:11:11.425272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.425308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.425430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.425466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.425637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.425686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.425807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.425843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.425994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.426047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.426236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.426289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.426390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.426423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.426557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.426599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.426736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.426770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.426928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.426961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 
00:37:45.532 [2024-11-10 00:11:11.427171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.427238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.427408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.427442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.427601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.427645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.427780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.427816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.427976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.428013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.428156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.428191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.428337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.428373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.428526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.428562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.428707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.428756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.428947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.429002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 
00:37:45.532 [2024-11-10 00:11:11.429171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.532 [2024-11-10 00:11:11.429223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.532 qpair failed and we were unable to recover it. 00:37:45.532 [2024-11-10 00:11:11.429389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.429443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.429604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.429638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.429793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.429843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.429954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.429988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.430141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.430194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.430359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.430393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.430534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.430569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.430728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.430782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.430935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.430975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 
00:37:45.533 [2024-11-10 00:11:11.431119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.431156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.431356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.431411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.431554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.431599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.431739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.431777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.431892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.431929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.432049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.432086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.432269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.432320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.432493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.432527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.432638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.432672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.432805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.432858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 
00:37:45.533 [2024-11-10 00:11:11.433009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.433061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.433207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.433240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.433345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.433378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.433510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.433544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.433659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.433692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.433830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.433865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.434005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.434038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.434175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.434208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.434343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.434375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.434485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.434520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 
00:37:45.533 [2024-11-10 00:11:11.434660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.434695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.434855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.434893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.435043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.435087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.435233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.435282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.435439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.435475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.435620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.435655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.435782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.435834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.436018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.436070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.533 qpair failed and we were unable to recover it. 00:37:45.533 [2024-11-10 00:11:11.436206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.533 [2024-11-10 00:11:11.436259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.436427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.436461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 
00:37:45.534 [2024-11-10 00:11:11.436601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.436642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.436800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.436857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.436976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.437011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.437143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.437177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.437309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.437357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.437475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.437509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.437637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.437674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.437895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.437932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.438052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.438089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.438242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.438280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 
00:37:45.534 [2024-11-10 00:11:11.438417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.438453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.438603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.438639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.438798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.438850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.439002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.439058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.439173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.439207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.439358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.439394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.439538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.439577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.439725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.439779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.439931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.439982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.440141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.440189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 
00:37:45.534 [2024-11-10 00:11:11.440308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.440345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.440506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.440552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.440697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.440754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.440877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.440925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.441066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.441101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.441217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.441249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.441358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.441392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.441516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.441548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.441676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.441710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.441836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.441889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 
00:37:45.534 [2024-11-10 00:11:11.442018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.442054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.442190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.442235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.442419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.442479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.442618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.534 [2024-11-10 00:11:11.442653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.534 qpair failed and we were unable to recover it. 00:37:45.534 [2024-11-10 00:11:11.442782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.442836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.442998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.443048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.443172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.443207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.443317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.443353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.443498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.443537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.443659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.443696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 
00:37:45.535 [2024-11-10 00:11:11.443884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.443936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.444090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.444144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.444253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.444286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.444437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.444471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.444592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.444627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.444732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.444765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.444905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.444938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.445095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.445129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.445288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.445320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.445428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.445460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 
00:37:45.535 [2024-11-10 00:11:11.445601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.445655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.445784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.445821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.445964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.446000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.446123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.446161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.446342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.446405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.446514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.446548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.446762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.446815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.447003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.447052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.447205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.447255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.447395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.447429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 
00:37:45.535 [2024-11-10 00:11:11.447574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.447632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.447813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.447867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.448082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.448158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.448360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.448425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.448595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.448629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.448747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.448781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.448953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.449002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.449136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.449188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.449340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.449405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.449597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.449635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 
00:37:45.535 [2024-11-10 00:11:11.449744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.535 [2024-11-10 00:11:11.449780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.535 qpair failed and we were unable to recover it. 00:37:45.535 [2024-11-10 00:11:11.449901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.449934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.450115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.450157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.450337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.450373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.450502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.450539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.450696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.450744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.450933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.450973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.451109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.451144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.451275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.451312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.451470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.451517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 
00:37:45.536 [2024-11-10 00:11:11.451646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.451683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.451845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.451913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.452106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.452145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.452276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.452339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.452496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.452531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.452658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.452706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.452847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.452915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.453146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.453184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.453325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.453360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.453517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.453554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 
00:37:45.536 [2024-11-10 00:11:11.453701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.453747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.453869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.453904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.454063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.454115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.454261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.454298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.454442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.454475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.454580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.454621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.454772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.454822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.454937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.454973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.455081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.455114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.455250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.455282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 
00:37:45.536 [2024-11-10 00:11:11.455394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.536 [2024-11-10 00:11:11.455430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.536 qpair failed and we were unable to recover it. 00:37:45.536 [2024-11-10 00:11:11.455565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.455604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.455718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.455752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.455867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.455901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.456034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.456068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.456208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.456241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.456345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.456379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.456517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.456549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.456664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.456696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.456829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.456862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 
00:37:45.537 [2024-11-10 00:11:11.456997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.457029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.457129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.457161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.457266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.457306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.457477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.457511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.457670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.457721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.457882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.457917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.458090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.458123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.458307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.458362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.458469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.458505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.458646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.458680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 
00:37:45.537 [2024-11-10 00:11:11.458837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.458887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.458993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.459026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.459201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.459259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.459427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.459460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.459597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.459647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.459758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.459794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.459969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.460008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.460173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.460231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.460349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.460387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.460545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.460580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 
00:37:45.537 [2024-11-10 00:11:11.460713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.460749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.460871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.460939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.461148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.461207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.461333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.537 [2024-11-10 00:11:11.461370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.537 qpair failed and we were unable to recover it. 00:37:45.537 [2024-11-10 00:11:11.461481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.461517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.461669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.461703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.461830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.461882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.462034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.462084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.462215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.462277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.462432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.462466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 
00:37:45.538 [2024-11-10 00:11:11.462642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.462676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.462780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.462812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.462997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.463029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.463127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.463160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.463340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.463387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.463564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.463605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.463746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.463793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.463944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.463978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.464097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.464131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.464266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.464300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 
00:37:45.538 [2024-11-10 00:11:11.464412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.464446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.464548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.464581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.464726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.464763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.464885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.464939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.465053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.465088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.465267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.465304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.465463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.465497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.465608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.465659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.465784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.465817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.465945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.465995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 
00:37:45.538 [2024-11-10 00:11:11.466125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.466159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.466279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.466329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.466464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.466498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.466677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.466726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.466874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.466911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.467074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.467134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.467261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.467298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.467472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.467520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.538 [2024-11-10 00:11:11.467679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.538 [2024-11-10 00:11:11.467725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.538 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.467861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.467902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 
00:37:45.539 [2024-11-10 00:11:11.468039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.468073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.468211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.468245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.468376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.468409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.468537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.468570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.468751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.468802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.468931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.468967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.469107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.469151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.469292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.469325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.469461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.469494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.469605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.469646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 
00:37:45.539 [2024-11-10 00:11:11.469811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.469864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.469997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.470036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.470218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.470273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.470417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.470453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.470604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.470654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.470802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.470835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.471010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.471070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.471265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.471320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.471451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.471484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.471584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.471630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 
00:37:45.539 [2024-11-10 00:11:11.471747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.471779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.471958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.471995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.472156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.472218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.472350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.472388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.472538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.472575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.472733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.472766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.472911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.472947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.473094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.473131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.473294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.473330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.473512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.473560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 
00:37:45.539 [2024-11-10 00:11:11.473705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.473752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.473935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.473982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.474127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.474166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.474290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.474327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.539 [2024-11-10 00:11:11.474478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.539 [2024-11-10 00:11:11.474514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.539 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.474664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.474699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.474889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.474942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.475108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.475151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.475321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.475358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.475511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.475550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 
00:37:45.540 [2024-11-10 00:11:11.475702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.475740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.475893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.475941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.476130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.476166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.476326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.476378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.476523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.476557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.476696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.476731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.476887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.476924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.477180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.477245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.477387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.477446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.477628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.477664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 
00:37:45.540 [2024-11-10 00:11:11.477780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.477814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.478004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.478068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.478235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.478287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.478411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.478448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.478636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.478670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.478799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.478868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.479036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.479075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.479231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.479269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.479389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.479426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.479577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.479639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 
00:37:45.540 [2024-11-10 00:11:11.479773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.479821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.479960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.480016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.480173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.480229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.480390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.480442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.480573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.480614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.480743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.540 [2024-11-10 00:11:11.480776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.540 qpair failed and we were unable to recover it. 00:37:45.540 [2024-11-10 00:11:11.480888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.480922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.481044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.481078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.481209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.481241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.481341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.481373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 
00:37:45.541 [2024-11-10 00:11:11.481496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.481544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.481718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.481766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.481890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.481925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.482068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.482107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.482281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.482316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.482436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.482472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.482660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.482698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.482839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.482892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.483115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.483175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.483336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.483409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 
00:37:45.541 [2024-11-10 00:11:11.483527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.483559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.483699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.483751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.483899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.483950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.484116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.484150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.484249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.484283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.484384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.484418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.484555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.484595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.484731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.484768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.484895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.484928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.485059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.485092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 
00:37:45.541 [2024-11-10 00:11:11.485210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.485243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.485375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.485408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.485515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.485549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.485703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.485755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.485922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.485962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.486075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.541 [2024-11-10 00:11:11.486126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.541 qpair failed and we were unable to recover it. 00:37:45.541 [2024-11-10 00:11:11.486261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.486295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.486435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.486468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.486619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.486672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.486820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.486888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 
00:37:45.542 [2024-11-10 00:11:11.487059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.487135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.487390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.487458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.487562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.487607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.487765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.487814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.487949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.488002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.488240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.488309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.488448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.488487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.488673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.488707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.488857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.488906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.489121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.489158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 
00:37:45.542 [2024-11-10 00:11:11.489357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.489415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.489541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.489574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.489698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.489730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.489845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.489880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.490060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.490112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.490262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.490299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.490452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.490485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.490646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.490704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.490808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.490841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.490996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.491049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 
00:37:45.542 [2024-11-10 00:11:11.491188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.491222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.491346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.491394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.491569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.491612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.491736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.491770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.491901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.491934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.492059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.492093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.492254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.492287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.492431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.492466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.492610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.492653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.492800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.492848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 
00:37:45.542 [2024-11-10 00:11:11.493020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.493054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.542 qpair failed and we were unable to recover it. 00:37:45.542 [2024-11-10 00:11:11.493251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.542 [2024-11-10 00:11:11.493303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.493452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.493485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.493619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.493660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.493779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.493811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.493971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.494007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.494120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.494156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.494339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.494396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.494544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.494577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.494728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.494776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 
00:37:45.543 [2024-11-10 00:11:11.494934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.495003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.495138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.495194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.495346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.495388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.495568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.495614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.495737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.495769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.495903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.495953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.496130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.496191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.496433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.496508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.496662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.496705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.496828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.496862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 
00:37:45.543 [2024-11-10 00:11:11.497067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.497105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.497243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.497311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.497510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.497564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.497728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.497776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.497995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.498060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.498230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.498287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.498442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.498475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.498610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.498647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.498755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.498787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.498908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.498944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 
00:37:45.543 [2024-11-10 00:11:11.499146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.499214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.499355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.499391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.499570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.499646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.499817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.499865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.500012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.500065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.500270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.500323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.500470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.543 [2024-11-10 00:11:11.500503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.543 qpair failed and we were unable to recover it. 00:37:45.543 [2024-11-10 00:11:11.500662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.500714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.500858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.500908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.501109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.501168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 
00:37:45.544 [2024-11-10 00:11:11.501372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.501437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.501580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.501649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.501793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.501841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.501998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.502054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.502168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.502206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.502457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.502511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.502687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.502735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.502920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.502967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.503100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.503154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.503361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.503425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 
00:37:45.544 [2024-11-10 00:11:11.503580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.503623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.503754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.503801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.503942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.503977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.504182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.504256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.504398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.504434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.504573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.504626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.504891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.504939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.505138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.505191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.505348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.505400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.505543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.505576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 
00:37:45.544 [2024-11-10 00:11:11.505722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.505755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.505892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.505941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.506120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.506154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.506290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.506323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.506449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.506480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.506620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.506654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.506786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.506835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.507018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.507071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.507181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.507217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.507407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.507462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 
00:37:45.544 [2024-11-10 00:11:11.507639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.507677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.507876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.507931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.544 [2024-11-10 00:11:11.508094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.544 [2024-11-10 00:11:11.508135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.544 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.508390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.508450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.508615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.508657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.508776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.508814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.508926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.508962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.509105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.509142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.509281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.509319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.509504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.509552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 
00:37:45.545 [2024-11-10 00:11:11.509715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.509751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.509916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.509981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.510186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.510246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.510395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.510432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.510593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.510635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.510777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.510810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.510976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.511029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.511291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.511350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.511501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.511538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.511705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.511739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 
00:37:45.545 [2024-11-10 00:11:11.511866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.511899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.512012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.512064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.512172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.512208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.512364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.512400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.512574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.512653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.512785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.512833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.513192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.513231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.513442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.513481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.513648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.513681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.513816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.513854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 
00:37:45.545 [2024-11-10 00:11:11.514067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.514126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.514248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.514285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.514515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.514551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.514726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.514760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.514868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.514900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.515047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.515083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.515260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.515298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.545 [2024-11-10 00:11:11.515471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.545 [2024-11-10 00:11:11.515525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.545 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.515687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.515735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.515887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.515927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 
00:37:45.546 [2024-11-10 00:11:11.516109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.516170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.516384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.516442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.516604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.516645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.516768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.516820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.516945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.516997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.517155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.517208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.517342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.517376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.517486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.517520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.517660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.517708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.517862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.517902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 
00:37:45.546 [2024-11-10 00:11:11.518009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.518041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.518151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.518188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.518346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.518379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.518569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.518637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.518806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.518858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.518992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.519044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.519253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.519302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.519441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.519474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.519609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.519653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.519786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.519842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 
00:37:45.546 [2024-11-10 00:11:11.519989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.520039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.520148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.520182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.520346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.520379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.520496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.520530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.520681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.520716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.520854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.520889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.546 [2024-11-10 00:11:11.520989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.546 [2024-11-10 00:11:11.521021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.546 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.521157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.521188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.521320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.521352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.521507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.521555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 
00:37:45.547 [2024-11-10 00:11:11.521719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.521772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.521930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.521983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.522158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.522217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.522355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.522390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.522497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.522530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.522661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.522708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.522863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.522899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.523034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.523066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.523183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.523216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.523327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.523359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 
00:37:45.547 [2024-11-10 00:11:11.523463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.523495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.523651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.523706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.523858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.523908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.524057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.524109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.524235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.524286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.524412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.524445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.524633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.524681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.524793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.524829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.524966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.524998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.525105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.525145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 
00:37:45.547 [2024-11-10 00:11:11.525277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.525311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.525417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.525461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.525626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.525660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.525809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.525857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.526021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.526070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.526276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.526339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.526464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.526498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.526624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.526660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.526818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.526865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.527022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.527075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 
00:37:45.547 [2024-11-10 00:11:11.527214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.547 [2024-11-10 00:11:11.527265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.547 qpair failed and we were unable to recover it. 00:37:45.547 [2024-11-10 00:11:11.527378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.527413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.527543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.527577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.527748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.527796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.527935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.527975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.528159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.528197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.528341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.528379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.528518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.528552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.528690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.528724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.528841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.528877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 
00:37:45.548 [2024-11-10 00:11:11.529000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.529036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.529291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.529350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.529501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.529536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.529699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.529758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.529912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.529948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.530151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.530189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.530313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.530371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.530516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.530549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.530669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.530703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.530818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.530868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 
00:37:45.548 [2024-11-10 00:11:11.530986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.531036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.531186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.531224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.531397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.531434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.531616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.531665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.531823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.531893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.532104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.532164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.532322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.532401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.532583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.532624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.532754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.532801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.533058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.533115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 
00:37:45.548 [2024-11-10 00:11:11.533296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.533394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.533559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.533613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.533773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.533809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.533959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.534028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.548 [2024-11-10 00:11:11.534163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.548 [2024-11-10 00:11:11.534219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.548 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.534433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.534470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.534624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.534658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.534822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.534875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.535010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.535049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.535167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.535204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 
00:37:45.549 [2024-11-10 00:11:11.535385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.535420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.535541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.535572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.535717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.535750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.535857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.535905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.536087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.536123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.536242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.536278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.536417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.536471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.536660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.536695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.536823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.536876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.537022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.537073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 
00:37:45.549 [2024-11-10 00:11:11.537213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.537247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.537382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.537416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.537577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.537618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.537733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.537782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.537934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.537982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.538128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.538161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.538431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.538491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.538617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.538652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.538830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.538867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.539158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.539230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 
00:37:45.549 [2024-11-10 00:11:11.539467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.539506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.539672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.539706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.539834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.539869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.540073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.540109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.540304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.540362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.540489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.540522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.540663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.540712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.540857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.540911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.541167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.541225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.549 qpair failed and we were unable to recover it. 00:37:45.549 [2024-11-10 00:11:11.541421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.549 [2024-11-10 00:11:11.541489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 
00:37:45.550 [2024-11-10 00:11:11.541683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.541717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.541834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.541881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.542044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.542097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.542273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.542307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.542420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.542454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.542594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.542629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.542800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.542854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.542979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.543017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.543229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.543288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.543445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.543478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 
00:37:45.550 [2024-11-10 00:11:11.543603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.543637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.543753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.543800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.543952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.544004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.544246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.544288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.544466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.544505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.544675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.544710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.544817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.544852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.545011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.545080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.545350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.545402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.545596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.545631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 
00:37:45.550 [2024-11-10 00:11:11.545766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.545799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.545918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.545966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.546129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.546227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.546345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.546383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.546534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.546573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.546761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.546809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.546957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.546996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.547143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.547178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.547384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.547444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.547632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.547665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 
00:37:45.550 [2024-11-10 00:11:11.547788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.547845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.548021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.550 [2024-11-10 00:11:11.548085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.550 qpair failed and we were unable to recover it. 00:37:45.550 [2024-11-10 00:11:11.548334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.548391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.548508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.548546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.548747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.548781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.548932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.548965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.549066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.549118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.549236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.549277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.549448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.549486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.549674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.549728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 
00:37:45.551 [2024-11-10 00:11:11.549990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.550029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.550308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.550368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.550520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.550557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.550734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.550769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.550917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.550952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.551178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.551240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.551429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.551466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.551635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.551669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.551816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.551859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.552031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.552069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 
00:37:45.551 [2024-11-10 00:11:11.552236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.552273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.552403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.552456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.552604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.552667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.552861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.552918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.553127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.553164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.553274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.553310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.553429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.553463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.553572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.553629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.553780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.553814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.553943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.553977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 
00:37:45.551 [2024-11-10 00:11:11.554105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.554149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.554297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.554345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.554452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.551 [2024-11-10 00:11:11.554487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.551 qpair failed and we were unable to recover it. 00:37:45.551 [2024-11-10 00:11:11.554635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.554671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.554775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.554809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.554946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.554980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.555110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.555143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.555281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.555316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.555424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.555458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.555638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.555686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 
00:37:45.552 [2024-11-10 00:11:11.555850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.555902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.556057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.556110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.556264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.556314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.556419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.556452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.556620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.556673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.556839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.556878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.557091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.557158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.557422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.557479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.557594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.557632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.557808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.557879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 
00:37:45.552 [2024-11-10 00:11:11.558014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.558068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.558317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.558376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.558520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.558557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.558722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.558770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.558956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.559009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.559238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.559297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.559559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.559625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.559802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.559834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.559951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.560002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.560125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.560174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 
00:37:45.552 [2024-11-10 00:11:11.560438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.560493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.560678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.560714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.560816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.560848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.561073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.561110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.561256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.561291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.561404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.561440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.561611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.552 [2024-11-10 00:11:11.561659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.552 qpair failed and we were unable to recover it. 00:37:45.552 [2024-11-10 00:11:11.561791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.561840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.562031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.562091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.562274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.562341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 
00:37:45.553 [2024-11-10 00:11:11.562500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.562534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.562660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.562695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.562874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.562940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.563192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.563231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.563444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.563499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.563682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.563716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.563875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.563923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.564138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.564197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.564343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.564402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.564537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.564570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 
00:37:45.553 [2024-11-10 00:11:11.564715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.564763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.564880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.564932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.565110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.565173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.565424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.565482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.565655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.565690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.565818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.565890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.566118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.566157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.566335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.566373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.566523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.566560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.566713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.566767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 
00:37:45.553 [2024-11-10 00:11:11.566949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.567001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.567240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.567279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.567505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.567544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.567696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.567731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.567881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.567917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.568118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.568176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.568293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.568331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.568485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.568522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.568710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.568757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.553 [2024-11-10 00:11:11.568890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.568938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 
00:37:45.553 [2024-11-10 00:11:11.569129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.553 [2024-11-10 00:11:11.569183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.553 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.569361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.569457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.569618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.569652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.569823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.569877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.570097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.570154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.570377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.570433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.570608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.570668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.570789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.570824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.570933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.570967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.571134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.571173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 
00:37:45.554 [2024-11-10 00:11:11.571309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.571362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.571519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.571557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.571726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.571761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.571916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.571965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.572127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.572184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.572342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.572394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.572542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.572600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.572760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.572797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.572926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.572964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.573082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.573130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 
00:37:45.554 [2024-11-10 00:11:11.573313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.573369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.573542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.573579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.573716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.573752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.573943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.574001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.574127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.574164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.574329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.574363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.574515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.574563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.574760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.574807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.574973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.575014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.575158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.575202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 
00:37:45.554 [2024-11-10 00:11:11.575318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.575368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.575503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.575537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.575645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.575679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.575812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.575860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.576007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.554 [2024-11-10 00:11:11.576043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.554 qpair failed and we were unable to recover it. 00:37:45.554 [2024-11-10 00:11:11.576190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.576242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.576415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.576452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.576603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.576647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.576780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.576813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.576964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.577001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 
00:37:45.555 [2024-11-10 00:11:11.577161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.577211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.577411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.577447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.577574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.577642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.577774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.577812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.578040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.578076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.578192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.578228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.578466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.578541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.578756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.578804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.579000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.579112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.579325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.579382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 
00:37:45.555 [2024-11-10 00:11:11.579542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.579575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.579724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.579758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.579874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.579906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.580075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.580131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.580385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.580440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.580591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.580645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.580784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.580832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.581026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.581081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.581268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.581350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.581511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.581543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 
00:37:45.555 [2024-11-10 00:11:11.581681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.581714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.581850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.581901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.582046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.582082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.582263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.582299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.582450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.582487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.582642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.582676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.582826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.582859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.582986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.583019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.555 [2024-11-10 00:11:11.583175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.555 [2024-11-10 00:11:11.583211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.555 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.583329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.583384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 
00:37:45.556 [2024-11-10 00:11:11.583543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.583604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.583793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.583840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.584017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.584057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.584217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.584254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.584417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.584452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.584625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.584673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.584824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.584860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.585024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.585077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.585271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.585328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.585435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.585470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 
00:37:45.556 [2024-11-10 00:11:11.585649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.585698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.585836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.585871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.586033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.586066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.586221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.586258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.586394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.586431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.586599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.586648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.586787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.586841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.587021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.587075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.587208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.587284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.587418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.587453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 
00:37:45.556 [2024-11-10 00:11:11.587597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.587638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.587774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.587810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.587928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.587972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.588138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.588171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.588310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.588344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.588467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.588500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.588676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.588730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.588915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.588968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.589121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.589173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 00:37:45.556 [2024-11-10 00:11:11.589338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.556 [2024-11-10 00:11:11.589371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.556 qpair failed and we were unable to recover it. 
00:37:45.557 [2024-11-10 00:11:11.589508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.589541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.589741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.589793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.589962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.590042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.590306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.590364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.590511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.590545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.590694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.590731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.590863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.590899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.591066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.591102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.591306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.591363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.591491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.591531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 
00:37:45.557 [2024-11-10 00:11:11.591706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.591739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.591864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.591924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.592096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.592151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.592288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.592344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.592482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.592517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.592654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.592688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.592864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.592912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.593069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.593107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.593239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.593272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.593430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.593466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 
00:37:45.557 [2024-11-10 00:11:11.593606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.593650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.593767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.593819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.593933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.593969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.594111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.594147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.594328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.594382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.594525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.594605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.594793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.594849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.594988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.595022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.595156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.595190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.595299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.595333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 
00:37:45.557 [2024-11-10 00:11:11.595466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.595499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.595631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.595665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.595823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.595890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.596093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.596134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.596274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.596325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.596473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.596510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.596709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.557 [2024-11-10 00:11:11.596757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.557 qpair failed and we were unable to recover it. 00:37:45.557 [2024-11-10 00:11:11.596905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.596959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.597150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.597187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.597381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.597437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 
00:37:45.558 [2024-11-10 00:11:11.597600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.597635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.597793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.597840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.598032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.598092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.598348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.598408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.598517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.598553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.598736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.598785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.598954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.598989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.599117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.599154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.599274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.599311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.599443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.599501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 
00:37:45.558 [2024-11-10 00:11:11.599677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.599731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.599882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.599935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.600164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.600230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.600428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.600481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.600620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.600661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.600784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.600835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.600991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.601043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.601228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.601279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.601436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.601474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.601642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.601677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 
00:37:45.558 [2024-11-10 00:11:11.601817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.601860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.602048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.602145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.602328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.602386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.602567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.602608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.602769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.602821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.602975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.603030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.603213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.603267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.603378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.603411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.603548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.603582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 00:37:45.558 [2024-11-10 00:11:11.603775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.603815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.558 qpair failed and we were unable to recover it. 
00:37:45.558 [2024-11-10 00:11:11.603966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.558 [2024-11-10 00:11:11.604003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.604166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.604199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.604450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.604488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.604609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.604664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.604790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.604838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.604983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.605036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.605218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.605256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.605367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.605404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.605596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.605650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.605800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.605836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 
00:37:45.559 [2024-11-10 00:11:11.605961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.606014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.606157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.606213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.606454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.606509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.606623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.606664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.606771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.606805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.606935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.606967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.607070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.607104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.607240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.607274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.607409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.607442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.607573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.607613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 
00:37:45.559 [2024-11-10 00:11:11.607817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.607869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.608027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.608077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.608211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.608244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.608375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.608409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.608603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.608685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.608824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.608864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.609014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.609053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.609228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.609266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.609405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.609457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.609622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.609659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 
00:37:45.559 [2024-11-10 00:11:11.609824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.609876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.610083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.610143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.610396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.559 [2024-11-10 00:11:11.610456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.559 qpair failed and we were unable to recover it. 00:37:45.559 [2024-11-10 00:11:11.610604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.610642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.610754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.610805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.610945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.610983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.611112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.611144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.611307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.611343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.611491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.611528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.611676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.611711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 
00:37:45.560 [2024-11-10 00:11:11.611845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.611894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.612053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.612086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.612286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.612322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.612464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.612500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.612624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.612674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.612819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.612855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.612967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.613009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.613152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.613188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.613296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.613332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.613489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.613543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 
00:37:45.560 [2024-11-10 00:11:11.613729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.613777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.613924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.613995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.614129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.614166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.614313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.614349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.614474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.614510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.614647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.614698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.614853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.614907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.615023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.615075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.615230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.615269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.615392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.615426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 
00:37:45.560 [2024-11-10 00:11:11.615619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.615667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.615868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.615923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.616084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.616122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.616232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.616268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.616407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.616443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.616599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.616644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.616758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.616794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.616931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.616969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.560 [2024-11-10 00:11:11.617120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.560 [2024-11-10 00:11:11.617157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.560 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.617420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.617481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 
00:37:45.561 [2024-11-10 00:11:11.617644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.617678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.617822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.617855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.617952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.618001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.618156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.618192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.618397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.618434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.618562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.618637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.618793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.618840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.619115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.619174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.619290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.619328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.619452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.619489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 
00:37:45.561 [2024-11-10 00:11:11.619639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.619674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.619839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.619890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.620040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.620076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.620272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.620308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.620425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.620463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.620650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.620684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.620837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.620891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.621026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.621081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.621279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.621336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.621439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.621474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 
00:37:45.561 [2024-11-10 00:11:11.621642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.621696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.621879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.621933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.622097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.622168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.622368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.622427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.622553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.622596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.622743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.622791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.623017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.623089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.623249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.561 [2024-11-10 00:11:11.623316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.561 qpair failed and we were unable to recover it. 00:37:45.561 [2024-11-10 00:11:11.623438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.623477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.623639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.623675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 
00:37:45.562 [2024-11-10 00:11:11.623834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.623886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.624108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.624182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.624427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.624466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.624613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.624664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.624802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.624834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.624936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.624969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.625069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.625102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.625247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.625303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.625439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.625472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.625625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.625673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 
00:37:45.562 [2024-11-10 00:11:11.625845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.625881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.626014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.626068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.626202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.626247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.626431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.626465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.626573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.626616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.626751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.626784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.626968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.627004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.627154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.627226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.627430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.627466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.627605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.627659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 
00:37:45.562 [2024-11-10 00:11:11.627799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.627832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.627950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.627987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.628104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.628140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.628283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.628318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.628501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.628550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.628715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.628763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.628903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.628963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.629091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.629130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.629327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.629402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.629518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.629568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 
00:37:45.562 [2024-11-10 00:11:11.629695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.629729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.562 [2024-11-10 00:11:11.629886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.562 [2024-11-10 00:11:11.629924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.562 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.630034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.630070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.630287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.630354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.630498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.630535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.630694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.630743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.630946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.630999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.631234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.631290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.631443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.631482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.631662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.631696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 
00:37:45.563 [2024-11-10 00:11:11.631850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.631898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.632075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.632139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.632334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.632396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.632537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.632571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.632710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.632758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.632879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.632934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.633094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.633164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.633365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.633424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.633562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.633603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.633768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.633801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 
00:37:45.563 [2024-11-10 00:11:11.633997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.634052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.634284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.634323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.634435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.634484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.634742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.634790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.635013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.635067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.635215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.635255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.635370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.635407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.635567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.635607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.635773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.635808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.635982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.636018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 
00:37:45.563 [2024-11-10 00:11:11.636217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.636321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.636445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.636493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.636634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.636667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.636777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.636810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.636946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.636998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.637172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.637208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.563 [2024-11-10 00:11:11.637380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.563 [2024-11-10 00:11:11.637425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.563 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.637569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.637613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.637768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.637801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.637951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.637999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 
00:37:45.564 [2024-11-10 00:11:11.638133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.638188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.638294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.638328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.638439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.638472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.638599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.638633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.638729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.638762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.638931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.638965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.639074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.639107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.639235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.639267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.639375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.639407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.639537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.639570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 
00:37:45.564 [2024-11-10 00:11:11.639698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.639731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.639913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.639949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.640098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.640134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.640287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.640323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.640482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.640517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.640683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.640717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.640856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.640891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.641043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.641104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.641297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.641364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.641519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.641557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 
00:37:45.564 [2024-11-10 00:11:11.641714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.641751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.641859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.641896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.642110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.642173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.642319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.642397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.642601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.642655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.642835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.642888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.643049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.643091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.643224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.643259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.643428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.643466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.643613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.643664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 
00:37:45.564 [2024-11-10 00:11:11.643828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.643880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.644009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.644060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.644223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.644278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.644419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.564 [2024-11-10 00:11:11.644456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.564 qpair failed and we were unable to recover it. 00:37:45.564 [2024-11-10 00:11:11.644627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.644660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.644823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.644873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.645029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.645071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.645221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.645258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.645439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.645475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.645643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.645676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 
00:37:45.565 [2024-11-10 00:11:11.645815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.645883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.646073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.646126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.646281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.646320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.646449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.646487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.646650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.646684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.646821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.646854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.647040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.647076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.647275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.647311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.647432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.647470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.647624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.647675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 
00:37:45.565 [2024-11-10 00:11:11.647856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.647922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.648082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.648135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.648291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.648341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.648437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.648471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.648662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.648716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.648914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.648967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.649187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.649243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.649454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.649511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.649674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.649707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.649831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.649898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 
00:37:45.565 [2024-11-10 00:11:11.650029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.650082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.650311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.650349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.650489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.650524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.650720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.650768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.650905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.650953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.651076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.651131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.651394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.651452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.651604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.651655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.651785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.651832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.652011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.652070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 
00:37:45.565 [2024-11-10 00:11:11.652262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.652328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.652463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.565 [2024-11-10 00:11:11.652500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.565 qpair failed and we were unable to recover it. 00:37:45.565 [2024-11-10 00:11:11.652703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.652751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.652890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.652938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.653158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.653215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.653406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.653466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.653580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.653636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.653807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.653841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.654059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.654095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.654311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.654365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 
00:37:45.566 [2024-11-10 00:11:11.654504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.654540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.654683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.654717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.654839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.654886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.655121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.655181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.655376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.655440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.655601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.655635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.655776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.655809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.656040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.656102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.656312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.656367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.656528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.656562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 
00:37:45.566 [2024-11-10 00:11:11.656727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.656775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.656959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.657013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.657212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.657277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.657393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.657443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.657575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.657618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.657724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.657756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.657910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.657946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.658117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.658190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.658326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.658361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 00:37:45.566 [2024-11-10 00:11:11.658486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.566 [2024-11-10 00:11:11.658525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.566 qpair failed and we were unable to recover it. 
00:37:45.566 [2024-11-10 00:11:11.658740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.658788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.658941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.658978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.659157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.659211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.659401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.659459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.659571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.659612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.659730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.659764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.659894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.659945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.660136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.660208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.660342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.660393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.660568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.660610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 
00:37:45.567 [2024-11-10 00:11:11.660730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.660778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.660917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.660989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.661176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.661249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.661449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.661501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.661612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.661647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.661825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.661879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.662037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.662082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.662218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.662252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.662392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.662425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.662579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.662634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 
00:37:45.567 [2024-11-10 00:11:11.662801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.662866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.663130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.663197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.663461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.663520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.663693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.663728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.663895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.663961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.664129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.664183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.664353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.664401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.664545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.664580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.664732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.664768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.664921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.665026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 
00:37:45.567 [2024-11-10 00:11:11.665221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.665279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.665420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.665453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.665558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.665600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.665776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.665816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.665943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.665980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.666089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.666126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.666352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.666410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.666585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.567 [2024-11-10 00:11:11.666629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.567 qpair failed and we were unable to recover it. 00:37:45.567 [2024-11-10 00:11:11.666780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.666828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.667020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.667059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 
00:37:45.568 [2024-11-10 00:11:11.667197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.667271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.667439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.667476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.667611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.667647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.667802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.667850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.668045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.668085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.668266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.668327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.668474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.668507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.668617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.668652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.668760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.668806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.668959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.669006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 
00:37:45.568 [2024-11-10 00:11:11.669235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.669290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.669443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.669502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.669612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.669648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.669799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.669852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.670127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.670188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.670407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.670465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.670607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.670647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.670786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.670821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.670942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.670990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.671160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.671195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 
00:37:45.568 [2024-11-10 00:11:11.671410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.671478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.671641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.671675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.671819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.671872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.672047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.672105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.672337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.672395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.672553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.672593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.672711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.672746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.672882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.672915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.673112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.673170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.673428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.673487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 
00:37:45.568 [2024-11-10 00:11:11.673669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.673717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.673836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.673872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.674054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.674136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.674380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.568 [2024-11-10 00:11:11.674418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.568 qpair failed and we were unable to recover it. 00:37:45.568 [2024-11-10 00:11:11.674539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.674576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.674744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.674778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.674891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.674941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.675048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.675085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.675342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.675398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.675560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.675599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 
00:37:45.569 [2024-11-10 00:11:11.675749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.675782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.675955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.676013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.676130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.676168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.676319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.676356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.676496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.676545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.676674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.676722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.676904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.676958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.677131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.677172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.677356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.677393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.677535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.677572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 
00:37:45.569 [2024-11-10 00:11:11.677738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.677771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.677900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.677938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.678109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.678147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.678330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.678367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.678559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.678632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.678797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.678852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.678958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.678997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.679106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.679141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.679390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.679444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.679571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.679627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 
00:37:45.569 [2024-11-10 00:11:11.679751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.679799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.679951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.680002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.680211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.680309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.680514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.680575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.680717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.680768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.681002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.681059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.681217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.681284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.681415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.681458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.681632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.681680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.681866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.681919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 
00:37:45.569 [2024-11-10 00:11:11.682130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.682192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.682399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.569 [2024-11-10 00:11:11.682458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.569 qpair failed and we were unable to recover it. 00:37:45.569 [2024-11-10 00:11:11.682633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.682667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.682791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.682830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.683014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.683078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.683331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.683388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.683538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.683574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.683715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.683749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.683898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.683935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.684202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.684255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 
00:37:45.570 [2024-11-10 00:11:11.684515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.684573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.684713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.684747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.684862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.684899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.685153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.685209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.685432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.685470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.685643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.685678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.685812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.685846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.686058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.686124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.686372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.686429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.686566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.686612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 
00:37:45.570 [2024-11-10 00:11:11.686805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.686852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.687071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.687133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.687318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.687376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.687529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.687566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.687733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.687781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.687903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.687939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.688101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.688145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.688349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.688407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.688530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.688567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.688727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.688759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 
00:37:45.570 [2024-11-10 00:11:11.688880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.688935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.689131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.689165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.689478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.689534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.689676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.689710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.689850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.689882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.690054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.690104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.690271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.690307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.570 [2024-11-10 00:11:11.690458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.570 [2024-11-10 00:11:11.690495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.570 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.690618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.690652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.690804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.690851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 
00:37:45.571 [2024-11-10 00:11:11.691005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.691043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.691243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.691312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.691486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.691522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.691671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.691705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.691808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.691845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.691964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.692002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.692159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.692196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.692318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.692369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.692525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.692563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.692724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.692757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 
00:37:45.571 [2024-11-10 00:11:11.692980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.693016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.693214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.693271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.693415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.693451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.693652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.693705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.693828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.693864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.693997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.694049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.694159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.694196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.694385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.694427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.694544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.694600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.694766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.694814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 
00:37:45.571 [2024-11-10 00:11:11.695021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.695060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.695196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.695229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.695383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.695420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.695600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.695642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.695782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.571 [2024-11-10 00:11:11.695818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.571 qpair failed and we were unable to recover it. 00:37:45.571 [2024-11-10 00:11:11.695988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.572 [2024-11-10 00:11:11.696041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.572 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.696180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.696223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.696375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.696413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.696559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.696604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.696736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.696770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-11-10 00:11:11.696892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.696945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.697154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.697191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.697365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.697403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.697529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.697566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.697706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.697740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.697890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.697951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.698171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.698226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.698425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.698490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.698609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.698645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.698788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.698847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-11-10 00:11:11.699070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.699110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.699258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.699297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.699415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.699451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.699578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.699642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.699794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.699842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.699989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.700023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.700145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.700182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.700403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.700459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.700649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.700683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.700804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.700840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 
00:37:45.854 [2024-11-10 00:11:11.700959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.700995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.701138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.701174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.701346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.701382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.701503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.701542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.701681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.701715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.701873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.701931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.702139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.702200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.702312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.854 [2024-11-10 00:11:11.702345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.854 qpair failed and we were unable to recover it. 00:37:45.854 [2024-11-10 00:11:11.702473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.702507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.702662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.702702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-11-10 00:11:11.702840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.702873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.703018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.703054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.703263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.703319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.703432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.703468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.703661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.703697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.703821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.703876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.704025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.704084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.704252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.704348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.704487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.704533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.704692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.704744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-11-10 00:11:11.704908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.704943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.705058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.705091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.705225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.705258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.705367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.705399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.705509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.705542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.705690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.705724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.705877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.705914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.706086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.706122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.706245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.706282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.706437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.706473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-11-10 00:11:11.706612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.706647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.706797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.706848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.706952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.706985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.707102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.707139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.707266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.707300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.707472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.707506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.707667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.707701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.707810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.707843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.707957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.707991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.708098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.708131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 
00:37:45.855 [2024-11-10 00:11:11.708240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.708273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.855 qpair failed and we were unable to recover it. 00:37:45.855 [2024-11-10 00:11:11.708395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.855 [2024-11-10 00:11:11.708448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.708614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.708662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.708794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.708842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.709039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.709076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.709324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.709383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.709522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.709555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.709702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.709736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.709845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.709878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.709988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.710022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-11-10 00:11:11.710185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.710221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.710345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.710396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.710559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.710603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.710756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.710790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.710897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.710930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.711061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.711094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.711266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.711298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.711523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.711561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.711718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.711751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.711853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.711886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-11-10 00:11:11.712024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.712076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.712186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.712224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.712366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.712402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.712571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.712637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.712812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.712847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.712999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.713053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.713225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.713259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.713414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.713462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.713633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.713681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.713797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.713833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 
00:37:45.856 [2024-11-10 00:11:11.713963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.713999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.714219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.714256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.714371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.714408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.714533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.714566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.714729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.856 [2024-11-10 00:11:11.714777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.856 qpair failed and we were unable to recover it. 00:37:45.856 [2024-11-10 00:11:11.714992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.715056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.715314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.715377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.715537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.715571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.715754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.715790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.715898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.715932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-11-10 00:11:11.716060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.716111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.716217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.716253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.716453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.716489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.716676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.716730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.716866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.716932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.717067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.717120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.717275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.717313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.717455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.717493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.717632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.717667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.717807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.717841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-11-10 00:11:11.717994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.718031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.718240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.718277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.718418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.718454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.718564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.718611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.718769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.718802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.718959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.718999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.719159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.719197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.719340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.719393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.719511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.719549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.719712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.719760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 
00:37:45.857 [2024-11-10 00:11:11.719909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.719945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.720058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.720108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.720318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.720379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.720546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.720580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.720724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.720757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.720926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.720963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.721137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.721174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.721287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.721324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.857 qpair failed and we were unable to recover it. 00:37:45.857 [2024-11-10 00:11:11.721471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.857 [2024-11-10 00:11:11.721510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.721695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.721744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-11-10 00:11:11.721897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.721933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.722190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.722247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.722488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.722543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.722731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.722779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.722917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.722955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.723141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.723199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.723486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.723522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.723686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.723720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.723856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.723905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.724109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.724162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-11-10 00:11:11.724294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.724348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.724485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.724519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.724688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.724736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.724904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.724964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.725184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.725239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.725359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.725393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.725504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.725537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.725656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.725689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.725794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.725845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.725985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.726022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-11-10 00:11:11.726237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.726292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.726439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.726476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.726624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.726660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.726819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.726871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.727030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.727082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.727262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.727314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.727450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.727483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.727632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.727686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.727824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.727862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.728071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.728104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 
00:37:45.858 [2024-11-10 00:11:11.728289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.728321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.728430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.858 [2024-11-10 00:11:11.728464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.858 qpair failed and we were unable to recover it. 00:37:45.858 [2024-11-10 00:11:11.728599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.728632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.728788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.728841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.729018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.729073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.729257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.729313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.729474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.729507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.729673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.729720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.729852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.729900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.730040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.730076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 
00:37:45.859 [2024-11-10 00:11:11.730213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.730247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.730346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.730379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.730531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.730579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.730754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.730807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.730964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.731015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.731162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.731213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.731322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.731355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.731476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.731514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.731633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.731668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.731784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.731821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 
00:37:45.859 [2024-11-10 00:11:11.732051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.732108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.732327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.732360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.732500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.732536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.732669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.732727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.732884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.732938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.733182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.733237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.733438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.733498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.733659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.733692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.733819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.733852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 00:37:45.859 [2024-11-10 00:11:11.734107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.734167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.859 qpair failed and we were unable to recover it. 
00:37:45.859 [2024-11-10 00:11:11.734352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.859 [2024-11-10 00:11:11.734413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.734545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.734605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.734743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.734779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.734970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.735024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.735262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.735321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.735450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.735489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.735649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.735683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.735857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.735910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.736149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.736210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.736493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.736557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-11-10 00:11:11.736680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.736715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.736826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.736859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.736991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.737028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.737218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.737255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.737377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.737414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.737579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.737655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.737807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.737855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.737998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.738055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.738271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.738305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.738447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.738481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-11-10 00:11:11.738637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.738685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.738809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.738843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.738990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.739028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.739170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.739205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.739306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.739339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.739505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.739539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.739650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.739686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.739826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.739877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.739978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.740011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.740157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.740208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 
00:37:45.860 [2024-11-10 00:11:11.740315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.740348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.740495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.740530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.740705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.740739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.740870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.860 [2024-11-10 00:11:11.740908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.860 qpair failed and we were unable to recover it. 00:37:45.860 [2024-11-10 00:11:11.741053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.741086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.741251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.741285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.741436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.741470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.741613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.741649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.741790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.741823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.741975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.742026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 
00:37:45.861 [2024-11-10 00:11:11.742174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.742228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.742359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.742392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.742555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.742594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.742757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.742791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.742915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.742963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.743088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.743124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.743287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.743321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.743438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.743473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.743605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.743640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.743800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.743834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 
00:37:45.861 [2024-11-10 00:11:11.743985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.744025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.744160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.744212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.744404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.744442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.744612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.744678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.744791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.744826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.744996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.745029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.745196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.745264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.745377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.745415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.745573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.745615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.745737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.745773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 
00:37:45.861 [2024-11-10 00:11:11.745977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.746010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.746155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.746192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.746404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.746469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.746628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.746663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.746800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.746834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.746968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.747020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.747157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.747193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.861 qpair failed and we were unable to recover it. 00:37:45.861 [2024-11-10 00:11:11.747373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.861 [2024-11-10 00:11:11.747410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.747567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.747630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.747790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.747838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 
00:37:45.862 [2024-11-10 00:11:11.748025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.748060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.748173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.748206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.748338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.748371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.748551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.748599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.748732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.748779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.748962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.749015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.749333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.749391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.749546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.749584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.749737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.749772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.749883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.749918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 
00:37:45.862 [2024-11-10 00:11:11.750024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.750058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.750230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.750268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.750406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.750458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.750628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.750663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.750794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.750827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.750959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.751009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.751150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.751188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.751378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.751420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.751611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.751659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.751811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.751858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 
00:37:45.862 [2024-11-10 00:11:11.752053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.752110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.752330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.752387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.752539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.752572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.752724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.752763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.752915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.752967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.753172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.753237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.753494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.753550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.753723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.753757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.753866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.753900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.862 [2024-11-10 00:11:11.754035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.754068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 
00:37:45.862 [2024-11-10 00:11:11.754270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.862 [2024-11-10 00:11:11.754340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.862 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.754522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.754558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.754723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.754758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.754884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.754932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.755147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.755210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.755402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.755459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.755576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.755638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.755764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.755812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.755980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.756032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.756157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.756211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 
00:37:45.863 [2024-11-10 00:11:11.756356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.756414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.756545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.756579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.756699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.756735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.756885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.756925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.757042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.757075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.757190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.757222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.757361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.757394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.757536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.757568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.757703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.757737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.757847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.757882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 
00:37:45.863 [2024-11-10 00:11:11.758038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.758090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.758297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.758350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.758489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.758522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.758674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.758709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.758842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.758880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.758999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.759036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.759183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.759218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.759444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.759482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.759600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.759652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.759769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.759817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 
00:37:45.863 [2024-11-10 00:11:11.760063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.760102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.760279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.760316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.760463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.760500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.760638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.760672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.863 qpair failed and we were unable to recover it. 00:37:45.863 [2024-11-10 00:11:11.760810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.863 [2024-11-10 00:11:11.760843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.760974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.761011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.761183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.761219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.761365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.761402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.761548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.761595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.761772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.761820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 
00:37:45.864 [2024-11-10 00:11:11.762020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.762075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.762229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.762280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.762433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.762466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.762627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.762677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.762796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.762833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.763000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.763034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.763225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.763289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.763407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.763441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.763568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.763608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.763740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.763792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 
00:37:45.864 [2024-11-10 00:11:11.763919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.763956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.764154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.764202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.764321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.764357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.764526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.764565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.764707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.764742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.764902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.764950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.765061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.765097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.765201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.765235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.765376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.765410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.765530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.765576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 
00:37:45.864 [2024-11-10 00:11:11.765723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.765770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.765901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.765955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.766139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.864 [2024-11-10 00:11:11.766192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.864 qpair failed and we were unable to recover it. 00:37:45.864 [2024-11-10 00:11:11.766341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.766393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.766535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.766568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.766721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.766772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.766937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.766986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.767111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.767147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.767296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.767362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.767517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.767550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 
00:37:45.865 [2024-11-10 00:11:11.767675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.767724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.767844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.767880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.768126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.768164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.768379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.768436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.768634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.768684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.768833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.768868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.769025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.769075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.769250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.769304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.769407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.769440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.769601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.769648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 
00:37:45.865 [2024-11-10 00:11:11.769760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.769794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.769974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.770027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.770306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.770363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.770486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.770524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.770656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.770691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.770823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.770857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.770999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.771041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.771302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.771373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.771536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.771568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.771720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.771753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 
00:37:45.865 [2024-11-10 00:11:11.771952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.772005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.772263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.772303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.772446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.772484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.772601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.772656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.772782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.772830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.772992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.773048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.865 [2024-11-10 00:11:11.773212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.865 [2024-11-10 00:11:11.773265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.865 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.773447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.773483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.773608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.773644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.773805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.773856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 
00:37:45.866 [2024-11-10 00:11:11.774032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.774072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.774292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.774330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.774445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.774482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.774621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.774654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.774762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.774795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.774928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.774960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.775063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.775095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.775237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.775270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.775393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.775440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.775580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.775623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 
00:37:45.866 [2024-11-10 00:11:11.775734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.775768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.775935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.775969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.776104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.776153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.776341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.776398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.776517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.776553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.776741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.776791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.777033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.777086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.777314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.777374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.777542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.777577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.777728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.777761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 
00:37:45.866 [2024-11-10 00:11:11.777935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.777988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.778124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.778176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.778317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.778371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.778533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.778570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.778737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.778770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.778932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.778998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.779155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.779209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.779424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.779497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.779649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.779685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 00:37:45.866 [2024-11-10 00:11:11.779855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.779909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.866 qpair failed and we were unable to recover it. 
00:37:45.866 [2024-11-10 00:11:11.780071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.866 [2024-11-10 00:11:11.780112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.780301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.780368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.780547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.780584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.780723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.780765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.780926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.780979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.781186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.781248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.781467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.781524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.781717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.781752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.781899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.781952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.782148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.782187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 
00:37:45.867 [2024-11-10 00:11:11.782420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.782477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.782605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.782661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.782807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.782840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.782992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.783089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.783350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.783407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.783536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.783568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.783692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.783726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.783890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.783959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.784129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.784169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.784379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.784416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 
00:37:45.867 [2024-11-10 00:11:11.784527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.784565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.784707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.784740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.784897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.784934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.785178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.785232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.785409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.785446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.785606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.785640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.785757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.785805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.786106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.786180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.786422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.786494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.786635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.786670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 
00:37:45.867 [2024-11-10 00:11:11.786814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.786851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.787021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.787054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.787199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.787275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.787392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.867 [2024-11-10 00:11:11.787442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.867 qpair failed and we were unable to recover it. 00:37:45.867 [2024-11-10 00:11:11.787602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.787654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.787797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.787833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.788022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.788060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.788184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.788235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.788370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.788409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.788563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.788617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 
00:37:45.868 [2024-11-10 00:11:11.788724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.788759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.788867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.788902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.789062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.789098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.789304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.789369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.789497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.789533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.789670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.789708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.789820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.789857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.790010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.790047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.790160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.790197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.790390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.790455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 
00:37:45.868 [2024-11-10 00:11:11.790609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.790645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.790752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.790786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.790912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.790964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.791144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.791200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.791382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.791434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.791544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.791577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.791705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.791753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.791942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.792004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.792148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.792183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.792429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.792462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 
00:37:45.868 [2024-11-10 00:11:11.792574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.792617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.792750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.792783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.792903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.792999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.793233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.793269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.793417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.793459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.793652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.793688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.793858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.793911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.794126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.868 [2024-11-10 00:11:11.794186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.868 qpair failed and we were unable to recover it. 00:37:45.868 [2024-11-10 00:11:11.794416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.794453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.794574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.794620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 
00:37:45.869 [2024-11-10 00:11:11.794761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.794831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.795033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.795105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.795316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.795370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.795503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.795536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.795661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.795695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.795852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.795903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.796058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.796112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.796381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.796442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.796599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.796653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.796792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.796828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 
00:37:45.869 [2024-11-10 00:11:11.797090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.797148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.797416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.797471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.797629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.797664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.797823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.797876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.798045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.798079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.798264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.798323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.798484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.798520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.798679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.798714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.798853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.798904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.799032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.799127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 
00:37:45.869 [2024-11-10 00:11:11.799291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.799343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.799491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.799528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.799686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.799719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.799892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.799946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.800100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.800140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.800342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.800380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.800528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.800565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.800760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.869 [2024-11-10 00:11:11.800808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.869 qpair failed and we were unable to recover it. 00:37:45.869 [2024-11-10 00:11:11.800979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.870 [2024-11-10 00:11:11.801014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.870 qpair failed and we were unable to recover it. 00:37:45.870 [2024-11-10 00:11:11.801148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.870 [2024-11-10 00:11:11.801221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.870 qpair failed and we were unable to recover it. 
00:37:45.870 [2024-11-10 00:11:11.801348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.870 [2024-11-10 00:11:11.801396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.870 qpair failed and we were unable to recover it. 00:37:45.870 [2024-11-10 00:11:11.801605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.870 [2024-11-10 00:11:11.801639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.870 qpair failed and we were unable to recover it. 00:37:45.870 [2024-11-10 00:11:11.801744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.870 [2024-11-10 00:11:11.801777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.870 qpair failed and we were unable to recover it. 00:37:45.870 [2024-11-10 00:11:11.801907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.870 [2024-11-10 00:11:11.801945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.870 qpair failed and we were unable to recover it. 00:37:45.870 [2024-11-10 00:11:11.802114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.870 [2024-11-10 00:11:11.802150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.870 qpair failed and we were unable to recover it. 00:37:45.870 [2024-11-10 00:11:11.802253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.870 [2024-11-10 00:11:11.802291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.870 qpair failed and we were unable to recover it. 00:37:45.870 [2024-11-10 00:11:11.802432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.870 [2024-11-10 00:11:11.802469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.870 qpair failed and we were unable to recover it. 00:37:45.870 [2024-11-10 00:11:11.802627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.870 [2024-11-10 00:11:11.802660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.870 qpair failed and we were unable to recover it. 00:37:45.870 [2024-11-10 00:11:11.802807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.870 [2024-11-10 00:11:11.802872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.870 qpair failed and we were unable to recover it. 00:37:45.870 [2024-11-10 00:11:11.803030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.870 [2024-11-10 00:11:11.803069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.870 qpair failed and we were unable to recover it. 
00:37:45.870 [2024-11-10 00:11:11.803212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-10 00:11:11.803265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-10 00:11:11.803431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-10 00:11:11.803468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.870 [2024-11-10 00:11:11.805216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.870 [2024-11-10 00:11:11.805269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:45.870 qpair failed and we were unable to recover it.
00:37:45.872 [2024-11-10 00:11:11.817072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:45.872 [2024-11-10 00:11:11.817120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:45.872 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously through 2024-11-10 00:11:11.845508 for the four tqpairs shown above: 0x6150001ffe80, 0x6150001f2f00, 0x61500021ff00 and 0x615000210000 ...]
00:37:45.876 [2024-11-10 00:11:11.845677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.876 [2024-11-10 00:11:11.845731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.876 qpair failed and we were unable to recover it. 00:37:45.876 [2024-11-10 00:11:11.845898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.876 [2024-11-10 00:11:11.845951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.876 qpair failed and we were unable to recover it. 00:37:45.876 [2024-11-10 00:11:11.846176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.876 [2024-11-10 00:11:11.846235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.876 qpair failed and we were unable to recover it. 00:37:45.876 [2024-11-10 00:11:11.846360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.876 [2024-11-10 00:11:11.846397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.876 qpair failed and we were unable to recover it. 00:37:45.876 [2024-11-10 00:11:11.846551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.876 [2024-11-10 00:11:11.846596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.876 qpair failed and we were unable to recover it. 00:37:45.876 [2024-11-10 00:11:11.846725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.876 [2024-11-10 00:11:11.846779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.876 qpair failed and we were unable to recover it. 00:37:45.876 [2024-11-10 00:11:11.846959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.876 [2024-11-10 00:11:11.847011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.876 qpair failed and we were unable to recover it. 00:37:45.876 [2024-11-10 00:11:11.847164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.876 [2024-11-10 00:11:11.847216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.876 qpair failed and we were unable to recover it. 00:37:45.876 [2024-11-10 00:11:11.847343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.876 [2024-11-10 00:11:11.847376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.876 qpair failed and we were unable to recover it. 00:37:45.876 [2024-11-10 00:11:11.847513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.876 [2024-11-10 00:11:11.847546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.876 qpair failed and we were unable to recover it. 
00:37:45.877 [2024-11-10 00:11:11.847702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.847740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.847933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.847987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.848155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.848253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.848424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.848463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.848602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.848668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.848844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.848897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.849049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.849089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.849225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.849260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.849400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.849434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.849571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.849615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 
00:37:45.877 [2024-11-10 00:11:11.849807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.849844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.849959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.849996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.850145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.850184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.850363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.850403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.850526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.850561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.850686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.850734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.850852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.850905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.851112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.851170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.851433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.851488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.851654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.851689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 
00:37:45.877 [2024-11-10 00:11:11.851812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.851886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.852096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.852155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.852294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.852328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.852478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.852512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.852653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.852687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.852840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.852907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.853044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.853084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.853284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.853322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.853470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.853506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.853678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.853726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 
00:37:45.877 [2024-11-10 00:11:11.853938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.853986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.854141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.854195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.854377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.854445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.877 qpair failed and we were unable to recover it. 00:37:45.877 [2024-11-10 00:11:11.854553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.877 [2024-11-10 00:11:11.854593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.854755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.854804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.854948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.854983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.855175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.855223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.855361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.855397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.855532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.855568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.855716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.855751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 
00:37:45.878 [2024-11-10 00:11:11.855935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.855986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.856094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.856128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.856262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.856323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.856459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.856493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.856628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.856662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.856822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.856860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.857052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.857089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.857315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.857373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.857486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.857523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.857688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.857723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 
00:37:45.878 [2024-11-10 00:11:11.857910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.857962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.858115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.858164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.858406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.858461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.858577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.858617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.858747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.858799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.859013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.859072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.859305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.859405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.859583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.859627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.859810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.859847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.860025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.860061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 
00:37:45.878 [2024-11-10 00:11:11.860265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.860326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.860456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.860491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.860630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.860664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.860812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.860860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.861005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.861059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.861329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.861389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.861573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.878 [2024-11-10 00:11:11.861615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.878 qpair failed and we were unable to recover it. 00:37:45.878 [2024-11-10 00:11:11.861765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.861813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.861983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.862023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.862157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.862209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 
00:37:45.879 [2024-11-10 00:11:11.862391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.862450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.862562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.862606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.862743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.862777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.862911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.862943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.863115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.863149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.863316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.863352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.863491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.863527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.863699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.863732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.863886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.863923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.864116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.864154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 
00:37:45.879 [2024-11-10 00:11:11.864281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.864318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.864517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.864565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.864720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.864756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.864907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.864963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.865067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.865100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.865284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.865338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.865452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.865485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.865610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.865645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.865747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.865780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.865926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.865973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 
00:37:45.879 [2024-11-10 00:11:11.866144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.866178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.866287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.866321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.866491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.866525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.866715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.866769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.866874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.866907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.867034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.867071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.867331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.879 [2024-11-10 00:11:11.867387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.879 qpair failed and we were unable to recover it. 00:37:45.879 [2024-11-10 00:11:11.867513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.867548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.867726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.867762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.867926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.867966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 
00:37:45.880 [2024-11-10 00:11:11.868125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.868201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.868364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.868400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.868533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.868567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.868704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.868759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.868875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.868908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.869071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.869105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.869253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.869312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.869534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.869569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.869688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.869721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.869866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.869933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 
00:37:45.880 [2024-11-10 00:11:11.870132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.870168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.870485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.870551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.870725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.870760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.870959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.871012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.871180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.871220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.871497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.871557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.871722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.871757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.871894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.871928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.872178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.872251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.872495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.872535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 
00:37:45.880 [2024-11-10 00:11:11.872702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.872737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.872906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.872958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.873098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.873135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.873343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.873379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.873493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.873531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.873720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.873769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.873924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.873973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.874112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.874167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.874308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.874361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.880 [2024-11-10 00:11:11.874524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.874558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 
00:37:45.880 [2024-11-10 00:11:11.874706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.880 [2024-11-10 00:11:11.874743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.880 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.874879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.874918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.875096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.875133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.875356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.875428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.875639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.875676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.875791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.875825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.875987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.876037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.876236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.876272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.876394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.876431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.876570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.876615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 
00:37:45.881 [2024-11-10 00:11:11.876790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.876844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.877031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.877085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.877268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.877307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.877448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.877485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.877644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.877679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.877808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.877856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.878001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.878037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.878249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.878347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.878523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.878561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.878709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.878744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 
00:37:45.881 [2024-11-10 00:11:11.878898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.878968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.879228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.879285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.879395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.879431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.879552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.879595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.879737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.879770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.879905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.879939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.880138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.880175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.880351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.880388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.880538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.880579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.880743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.880776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 
00:37:45.881 [2024-11-10 00:11:11.880916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.880969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.881094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.881134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.881335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.881418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.881526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.881563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.881 qpair failed and we were unable to recover it. 00:37:45.881 [2024-11-10 00:11:11.881680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.881 [2024-11-10 00:11:11.881733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.881861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.881894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.882100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.882171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.882403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.882465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.882578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.882620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.882728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.882761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 
00:37:45.882 [2024-11-10 00:11:11.882945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.882995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.883271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.883328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.883478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.883512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.883670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.883723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.883883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.883935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.884179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.884220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.884432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.884491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.884635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.884670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.884780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.884816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.885059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.885116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 
00:37:45.882 [2024-11-10 00:11:11.885332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.885391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.885551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.885596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.885733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.885793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.885961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.886016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.886219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.886282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.886417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.886450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.886554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.886594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.886739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.886790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.886973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.887026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.887193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.887274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 
00:37:45.882 [2024-11-10 00:11:11.887456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.887489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.887605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.887640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.887799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.887847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.888048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.888087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.888357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.888414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.888556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.888595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.888777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.888826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.888988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.882 [2024-11-10 00:11:11.889028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.882 qpair failed and we were unable to recover it. 00:37:45.882 [2024-11-10 00:11:11.889207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.889258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.889394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.889427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 
00:37:45.883 [2024-11-10 00:11:11.889559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.889601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.889724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.889771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.889896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.889932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.890046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.890080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.890241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.890275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.890410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.890443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.890598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.890646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.890792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.890828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.890958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.891007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.891179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.891214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 
00:37:45.883 [2024-11-10 00:11:11.891316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.891350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.891478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.891511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.891621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.891656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.891832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.891884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.892087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.892125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.892238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.892277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.892439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.892492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.892680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.892728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.892936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.893020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.893266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.893322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 
00:37:45.883 [2024-11-10 00:11:11.893473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.893529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.893683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.893717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.893938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.893986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.894219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.894292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.894480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.894580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.894727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.894761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.894874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.894907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.895043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.895080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.895276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.895335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.895483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.895519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 
00:37:45.883 [2024-11-10 00:11:11.895682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.895716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.883 [2024-11-10 00:11:11.895843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.883 [2024-11-10 00:11:11.895876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.883 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.895980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.896012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.896178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.896244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.896417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.896473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.896601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.896650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.896797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.896830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.897018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.897054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.897230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.897266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.897412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.897448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 
00:37:45.884 [2024-11-10 00:11:11.897560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.897607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.897768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.897801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.897955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.897992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.898143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.898180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.898297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.898333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.898454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.898490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.898612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.898663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.898817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.898865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.899005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.899060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.899243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.899296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 
00:37:45.884 [2024-11-10 00:11:11.899463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.899500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.899646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.899680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.899840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.899873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.899995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.900031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.900190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.900228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.900369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.900406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.900567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.900610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.900752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.900785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.900937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.900973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.901111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.901147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 
00:37:45.884 [2024-11-10 00:11:11.901264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.901306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.901480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.884 [2024-11-10 00:11:11.901528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.884 qpair failed and we were unable to recover it. 00:37:45.884 [2024-11-10 00:11:11.901698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.901751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.901901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.901941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.902114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.902152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.902302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.902339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.902486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.902522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.902687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.902721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.902833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.902871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.903004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.903057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 
00:37:45.885 [2024-11-10 00:11:11.903244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.903296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.903438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.903472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.903603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.903637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.903773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.903813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.903971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.904009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.904127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.904163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.904299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.904336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.904474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.904508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.904665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.904714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.904877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.904917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 
00:37:45.885 [2024-11-10 00:11:11.905034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.905073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.905193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.905232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.905385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.905423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.905578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.905636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.905788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.905833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.906035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.906072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.906273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.906329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.906483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.906520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.906650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.906684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.906792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.906824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 
00:37:45.885 [2024-11-10 00:11:11.906947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.906983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.907189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.907225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.907335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.907371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.907522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.907558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.907741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.907789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.885 [2024-11-10 00:11:11.907944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.885 [2024-11-10 00:11:11.907980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.885 qpair failed and we were unable to recover it. 00:37:45.886 [2024-11-10 00:11:11.908178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.886 [2024-11-10 00:11:11.908215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:45.886 qpair failed and we were unable to recover it. 00:37:45.886 [2024-11-10 00:11:11.908364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.886 [2024-11-10 00:11:11.908403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.886 qpair failed and we were unable to recover it. 00:37:45.886 [2024-11-10 00:11:11.908538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.886 [2024-11-10 00:11:11.908572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.886 qpair failed and we were unable to recover it. 00:37:45.886 [2024-11-10 00:11:11.908702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.886 [2024-11-10 00:11:11.908750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.886 qpair failed and we were unable to recover it. 
00:37:45.891 [2024-11-10 00:11:11.947736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.947769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.947925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.947962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.948135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.948168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.948275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.948310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.948477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.948514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.948661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.948697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.948863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.948896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.949005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.949038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.949168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.949201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.949362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.949399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 
00:37:45.892 [2024-11-10 00:11:11.949536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.949570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.949730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.949783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.949942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.949978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.950125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.950162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.950288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.950322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.950427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.950460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.950601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.950652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.950765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.950800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.950931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.950964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.951091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.951124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 
00:37:45.892 [2024-11-10 00:11:11.951259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.951292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.951421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.951455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.951592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.951626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.951754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.951787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.951900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.951932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.952069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.952106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.952282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.952315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.952475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.952512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.952647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.952684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.952850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.952883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 
00:37:45.892 [2024-11-10 00:11:11.953012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.953044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.953179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.953212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.953374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.892 [2024-11-10 00:11:11.953410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.892 qpair failed and we were unable to recover it. 00:37:45.892 [2024-11-10 00:11:11.953597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.953632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.953769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.953802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.953902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.953935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.954087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.954123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.954270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.954312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.954478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.954517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.954658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.954711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 
00:37:45.893 [2024-11-10 00:11:11.954852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.954889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.955007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.955044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.955176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.955209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.955351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.955384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.955517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.955549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.955728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.955762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.955906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.955939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.956087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.956123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.956278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.956311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.956450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.956484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 
00:37:45.893 [2024-11-10 00:11:11.956633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.956683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.956798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.956831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.956937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.956970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.957130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.957166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.957300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.957332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.957445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.957478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.957618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.957652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.957829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.957863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.957996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.958029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.958142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.958191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 
00:37:45.893 [2024-11-10 00:11:11.958348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.958380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.958507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.958540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.958653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.958687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.958787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.958820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.959008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.959044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.959184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.959221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.893 [2024-11-10 00:11:11.959365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.893 [2024-11-10 00:11:11.959417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.893 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.959608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.959642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.959802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.959835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.959992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.960030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 
00:37:45.894 [2024-11-10 00:11:11.960217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.960250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.960363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.960415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.960581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.960620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.960760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.960793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.960965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.960997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.961177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.961214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.961333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.961371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.961489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.961533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.961705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.961738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.961873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.961925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 
00:37:45.894 [2024-11-10 00:11:11.962080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.962116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.962304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.962337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.962450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.962482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.962602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.962636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.962778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.962814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.962935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.962972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.963098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.963131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.963259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.963292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.963455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.963491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.963644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.963678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 
00:37:45.894 [2024-11-10 00:11:11.963782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.963816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.963952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.963985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.964149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.964186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.964345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.964378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.964477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.964509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.964639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.964672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.964836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.964872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.964980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.965017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.965174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.965207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.894 [2024-11-10 00:11:11.965335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.965367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 
00:37:45.894 [2024-11-10 00:11:11.965515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.894 [2024-11-10 00:11:11.965551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.894 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.965727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.965761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.965892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.965925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.966098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.966134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.966284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.966320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.966460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.966496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.966646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.966679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.966856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.966893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.967035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.967072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.967239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.967272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 
00:37:45.895 [2024-11-10 00:11:11.967435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.967471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.967620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.967670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.967812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.967844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.967986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.968020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.968186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.968219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.968353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.968404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.968525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.968561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.968695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.968732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.968891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.968923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.969029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.969081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 
00:37:45.895 [2024-11-10 00:11:11.969226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.969262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.969414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.969447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.969603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.969637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.969740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.969790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.969970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.970003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.970166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.970214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.970369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.970401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.970499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.970532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.970678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.970711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 00:37:45.895 [2024-11-10 00:11:11.970833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.895 [2024-11-10 00:11:11.970866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.895 qpair failed and we were unable to recover it. 
00:37:45.895 [2024-11-10 00:11:11.971034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.971067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.971179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.971212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.971315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.971348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.971455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.971488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.971629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.971662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.971802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.971835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.971998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.972034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.972140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.972175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.972351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.972384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.972526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.972563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 
00:37:45.896 [2024-11-10 00:11:11.972737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.972770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.972903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.972936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.973068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.973101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.973208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.973240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.973413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.973445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.973544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.973577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.973751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.973784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.973890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.973922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.974052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.974084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.974262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.974299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 
00:37:45.896 [2024-11-10 00:11:11.974482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.974514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.974690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.974727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.974891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.974924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.975058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.975091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.975254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.975286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.975443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.975479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.975635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.975672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.975785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.975826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.975981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.976012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.976110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.976142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 
00:37:45.896 [2024-11-10 00:11:11.976344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.976377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.976539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.976572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.976730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.976762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.896 [2024-11-10 00:11:11.976868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.896 [2024-11-10 00:11:11.976900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.896 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.977080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.977112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.977264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.977302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.977475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.977507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.977650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.977684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.977864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.977900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.978064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.978096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 
00:37:45.897 [2024-11-10 00:11:11.978236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.978268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.978376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.978409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.978544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.978602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.978732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.978764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.978908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.978940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.979068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.979100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.979257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.979293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.979437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.979474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.979658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.979691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.979825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.979876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 
00:37:45.897 [2024-11-10 00:11:11.980039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.980071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.980183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.980216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.980356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.980388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.980519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.980551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.980716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.980764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.980900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.980943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.981099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.981132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.981272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.981306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.981412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.981445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.981580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.981623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 
00:37:45.897 [2024-11-10 00:11:11.981761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.981795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.981948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.981984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.982131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.982169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.982286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.982322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.982450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.982482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.982645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.982697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.982838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.982874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.897 qpair failed and we were unable to recover it. 00:37:45.897 [2024-11-10 00:11:11.983019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.897 [2024-11-10 00:11:11.983060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.983213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.983245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.983375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.983425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 
00:37:45.898 [2024-11-10 00:11:11.983532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.983570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.983708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.983742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.983848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.983882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.983991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.984024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.984201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.984234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.984356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.984389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.984505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.984538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.984678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.984711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.984873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.984926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.985044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.985095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 
00:37:45.898 [2024-11-10 00:11:11.985258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.985291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.985457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.985507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.985644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.985677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.985817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.985850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.986017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.986050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.986209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.986245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.986426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.986463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.986680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.986717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.986837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.986869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.986975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.987008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 
00:37:45.898 [2024-11-10 00:11:11.987166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.987203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.987376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.987413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.987545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.987578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.987797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.987830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.987998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.988035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.988146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.988183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.988335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.988368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.988508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.988540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.988756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.988790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.988917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.988953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 
00:37:45.898 [2024-11-10 00:11:11.989115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.898 [2024-11-10 00:11:11.989148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.898 qpair failed and we were unable to recover it. 00:37:45.898 [2024-11-10 00:11:11.989279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.989311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.989415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.989447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.989602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.989655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.989820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.989853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.989949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.990001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.990219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.990253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.990354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.990391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.990611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.990660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.990771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.990804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 
00:37:45.899 [2024-11-10 00:11:11.991012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.991066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.991203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.991242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.991425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.991458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.991561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.991620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.991778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.991812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.991969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.992005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.992168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.992202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.992335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.992369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.992561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.992603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.992753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.992786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 
00:37:45.899 [2024-11-10 00:11:11.992955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.992988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.993195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.993254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.993431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.993479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.993594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.993648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.993784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.993817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.993925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.993959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.994210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.994265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.994384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.994420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.994570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.994614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.994726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.994758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 
00:37:45.899 [2024-11-10 00:11:11.994913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.994949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.995062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.995113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.995274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.995307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.995415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.995448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.995613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.899 [2024-11-10 00:11:11.995647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.899 qpair failed and we were unable to recover it. 00:37:45.899 [2024-11-10 00:11:11.995808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.995845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.995964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.995997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.996129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.996162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.996316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.996352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.996503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.996539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 
00:37:45.900 [2024-11-10 00:11:11.996698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.996731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.996861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.996913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.997045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.997082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.997269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.997302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.997454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.997491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.997644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.997677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.997779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.997812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.997983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.998020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.998152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.998185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.998328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.998376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 
00:37:45.900 [2024-11-10 00:11:11.998497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.998533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.998675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.998708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.998821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.998854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.999027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.999062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.999184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.999220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.999338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.999374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.999530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.999563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.999672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.999705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:11.999866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:11.999914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:12.000145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:12.000184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 
00:37:45.900 [2024-11-10 00:11:12.000365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:12.000399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:12.000537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:12.000597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:12.000769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:12.000802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:12.000985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:12.001021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:12.001158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.900 [2024-11-10 00:11:12.001191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.900 qpair failed and we were unable to recover it. 00:37:45.900 [2024-11-10 00:11:12.001326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.001359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.001498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.001536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.001666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.001699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.001804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.001836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.001954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.001987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 
00:37:45.901 [2024-11-10 00:11:12.002118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.002150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.002342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.002379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.002584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.002646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.002792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.002824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.002969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.003022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.003171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.003209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.003345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.003378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.003489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.003522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.003627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.003660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.003788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.003824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 
00:37:45.901 [2024-11-10 00:11:12.003978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.004010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.004188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.004224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.004369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.004405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.004597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.004636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.004817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.004849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.004980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.005031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.005249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.005309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.005439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.005494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.005626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.005660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.005842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.005878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 
00:37:45.901 [2024-11-10 00:11:12.006028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.006064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.006184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.006221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.006351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.006384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.006518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.006551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.006698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.006733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.006942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.006979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.007167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.007200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.007297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.007346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.007462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.901 [2024-11-10 00:11:12.007498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.901 qpair failed and we were unable to recover it. 00:37:45.901 [2024-11-10 00:11:12.007673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.007711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 
00:37:45.902 [2024-11-10 00:11:12.007888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.007921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.008165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.008221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.008361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.008397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.008572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.008612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.008773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.008806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.009038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.009092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.009270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.009332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.009487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.009523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.009674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.009707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.009822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.009854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 
00:37:45.902 [2024-11-10 00:11:12.009991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.010024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.010192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.010224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.010324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.010357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.010467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.010499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.010645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.010679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.010899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.010935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.011082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.011115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.011230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.011263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.011389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.011424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.011604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.011642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 
00:37:45.902 [2024-11-10 00:11:12.011764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.011797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.011910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.011943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.012065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.012101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.012242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.012278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.012399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.012432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.012528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.012560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.012755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.012792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.012941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.012983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.013108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.013140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.013304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.013354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 
00:37:45.902 [2024-11-10 00:11:12.013502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.013539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.013669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.013706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.902 [2024-11-10 00:11:12.013848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.902 [2024-11-10 00:11:12.013881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.902 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.013988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.014022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.014195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.014231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.014372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.014448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.014555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.014595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.014727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.014760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.014886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.014922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.015095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.015131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 
00:37:45.903 [2024-11-10 00:11:12.015256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.015289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.015457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.015508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.015656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.015694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.015805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.015841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.015967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.016000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.016129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.016162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.016302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.016334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.016440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.016474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.016618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.016652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.016780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.016813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 
00:37:45.903 [2024-11-10 00:11:12.016939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.016976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.017121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.017157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.017283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.017315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.017450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.017484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.017651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.017688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.017821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.017858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.018012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.018044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.018181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.018214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.018397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.018433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.018536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.018572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 
00:37:45.903 [2024-11-10 00:11:12.018717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.018751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.018888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.018921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.019022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.019055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.019210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.019246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.019419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.019452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.019581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.019646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.903 qpair failed and we were unable to recover it. 00:37:45.903 [2024-11-10 00:11:12.019814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.903 [2024-11-10 00:11:12.019847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.020017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.020049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.020235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.020267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.020374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.020426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 
00:37:45.904 [2024-11-10 00:11:12.020574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.020616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.020748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.020780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.020911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.020943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.021072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.021105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.021207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.021239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.021366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.021398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.021552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.021596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.021722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.021754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.021853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.021905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.022032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.022070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 
00:37:45.904 [2024-11-10 00:11:12.022198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.022230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.022373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.022406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.022500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.022533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.022691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.022728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.022856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.022888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.023002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.023036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.023197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.023230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.023350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.023386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.023566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.023615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.023774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.023810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 
00:37:45.904 [2024-11-10 00:11:12.023964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.024001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.024127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.024176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.024339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.024372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.024505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.024543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.024704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.024743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.024906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.024939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.025133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.025166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.025300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.025333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.025500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.025536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 00:37:45.904 [2024-11-10 00:11:12.025685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.904 [2024-11-10 00:11:12.025722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.904 qpair failed and we were unable to recover it. 
00:37:45.904 [2024-11-10 00:11:12.025884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.025916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.026045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.026095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.026236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.026272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.026427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.026463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.026596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.026629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.026764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.026799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.026939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.026972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.027070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.027103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.027243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.027276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.027412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.027444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 
00:37:45.905 [2024-11-10 00:11:12.027578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.027620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.027789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.027826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.028009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.028041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.028147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.028194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.028306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.028357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.028518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.028554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.028726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.028760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.028895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.028929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.029036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.029069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.029180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.029213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 
00:37:45.905 [2024-11-10 00:11:12.029340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.029373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.029477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.029510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.029696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.029730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.029863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.029895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.030059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.030092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.030187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.030219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.030347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.030380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:45.905 [2024-11-10 00:11:12.030532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:45.905 [2024-11-10 00:11:12.030568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:45.905 qpair failed and we were unable to recover it. 00:37:46.198 [2024-11-10 00:11:12.030726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.198 [2024-11-10 00:11:12.030760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.198 qpair failed and we were unable to recover it. 00:37:46.198 [2024-11-10 00:11:12.030896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.198 [2024-11-10 00:11:12.030929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.198 qpair failed and we were unable to recover it. 
00:37:46.198 [2024-11-10 00:11:12.031021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-10 00:11:12.031054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-10 00:11:12.031174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-10 00:11:12.031208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-10 00:11:12.031338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-10 00:11:12.031371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-10 00:11:12.031505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-10 00:11:12.031538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-10 00:11:12.031704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-10 00:11:12.031759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-10 00:11:12.031903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-10 00:11:12.031965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-10 00:11:12.032102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-10 00:11:12.032152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.199 qpair failed and we were unable to recover it. 00:37:46.199 [2024-11-10 00:11:12.032299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.199 [2024-11-10 00:11:12.032336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-10 00:11:12.032464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-10 00:11:12.032501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-10 00:11:12.032626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-10 00:11:12.032676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 
00:37:46.200 [2024-11-10 00:11:12.032813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-10 00:11:12.032847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-10 00:11:12.032948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-10 00:11:12.032980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.200 qpair failed and we were unable to recover it. 00:37:46.200 [2024-11-10 00:11:12.033112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.200 [2024-11-10 00:11:12.033144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.033252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.033284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.033446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.033478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.033609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.033668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.033776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.033808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.033939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.033972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.034103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.034136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.034273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.034305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 
00:37:46.201 [2024-11-10 00:11:12.034452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.034489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.034643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.034677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.034777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.034809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.034911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.034944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.035050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.035100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.035251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.035287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.035436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.035468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.035579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.035629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.035741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.035778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 00:37:46.201 [2024-11-10 00:11:12.035916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.201 [2024-11-10 00:11:12.035968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.201 qpair failed and we were unable to recover it. 
[... the same "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x6150001f2f00 and tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 from 00:11:12.036 through 00:11:12.072 ...]
00:37:46.214 [2024-11-10 00:11:12.072355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.214 [2024-11-10 00:11:12.072392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.214 qpair failed and we were unable to recover it.
00:37:46.214 [2024-11-10 00:11:12.072565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.214 [2024-11-10 00:11:12.072610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.214 qpair failed and we were unable to recover it.
00:37:46.214 [2024-11-10 00:11:12.072763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.072796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.214 [2024-11-10 00:11:12.072902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.072949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.214 [2024-11-10 00:11:12.073165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.073222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.214 [2024-11-10 00:11:12.073365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.073401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.214 [2024-11-10 00:11:12.073571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.073614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.214 [2024-11-10 00:11:12.073800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.073832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.214 [2024-11-10 00:11:12.073952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.073988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.214 [2024-11-10 00:11:12.074129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.074165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.214 [2024-11-10 00:11:12.074346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.074378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.214 [2024-11-10 00:11:12.074545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.074582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 
00:37:46.214 [2024-11-10 00:11:12.074804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.074853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.214 [2024-11-10 00:11:12.075023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.075062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.214 [2024-11-10 00:11:12.075196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.075229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.214 [2024-11-10 00:11:12.075389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.214 [2024-11-10 00:11:12.075422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.214 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.075558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.075609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.075756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.075789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.075955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.075988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.076094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.076127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.076259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.076291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.076423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.076459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 
00:37:46.215 [2024-11-10 00:11:12.076599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.076632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.076745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.076778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.076963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.076999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.077108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.077144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.077265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.077298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.077425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.077457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.077646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.077679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.077876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.077912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.078036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.078069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.078179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.078211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 
00:37:46.215 [2024-11-10 00:11:12.078327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.078378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.078509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.078542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.078650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.078683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.078792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.215 [2024-11-10 00:11:12.078824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.215 qpair failed and we were unable to recover it. 00:37:46.215 [2024-11-10 00:11:12.078929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.078961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.079099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.079132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.079291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.079323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.079503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.079540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.079736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.079770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.079905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.079938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 
00:37:46.216 [2024-11-10 00:11:12.080108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.080140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.080277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.080328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.080454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.080502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.080634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.080668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.080888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.080921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.081019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.081070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.081180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.081217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.081329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.081365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.081521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.081553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.081701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.081734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 
00:37:46.216 [2024-11-10 00:11:12.081866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.081899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.082049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.082084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.082231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.082263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.082401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.082434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.082558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.082603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.082722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.082755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.082892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.216 [2024-11-10 00:11:12.082931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.216 qpair failed and we were unable to recover it. 00:37:46.216 [2024-11-10 00:11:12.083079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.083115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.083255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.083291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.083436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.083473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 
00:37:46.217 [2024-11-10 00:11:12.083637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.083670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.083777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.083809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.083968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.084000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.084125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.084175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.084275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.084307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.084441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.084473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.084647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.084680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.084822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.084854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.085029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.085061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.085162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.085195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 
00:37:46.217 [2024-11-10 00:11:12.085371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.085408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.085580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.085623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.085746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.085779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.085918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.085951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.086053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.086085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.086208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.086244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.086381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.086432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.086573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.086616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.086773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.217 [2024-11-10 00:11:12.086806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.217 qpair failed and we were unable to recover it. 00:37:46.217 [2024-11-10 00:11:12.086916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.086965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 
00:37:46.218 [2024-11-10 00:11:12.087112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.087144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.087276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.087309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.087443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.087476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.087613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.087646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.087814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.087847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.087955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.087987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.088131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.088163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.088331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.088367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.088522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.088554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.088664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.088698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 
00:37:46.218 [2024-11-10 00:11:12.088816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.088878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.089052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.089105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.089258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.089292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.089405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.089438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.089544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.089583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.089754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.089787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.089915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.089947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.090081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.090115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.090277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.090315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.090460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.090497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 
00:37:46.218 [2024-11-10 00:11:12.090633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.090667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.218 [2024-11-10 00:11:12.090801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.218 [2024-11-10 00:11:12.090834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.218 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.091004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.091039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.091198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.091249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.091377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.091411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.091541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.091574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.091736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.091785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.091985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.092024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.092190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.092224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.092414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.092477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 
00:37:46.219 [2024-11-10 00:11:12.092642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.092675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.092808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.092840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.092991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.093024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.093184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.093217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.093403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.093440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.093584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.093629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.219 qpair failed and we were unable to recover it. 00:37:46.219 [2024-11-10 00:11:12.093770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.219 [2024-11-10 00:11:12.093803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.093941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.093992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.094103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.094140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.094263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.094299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 
00:37:46.220 [2024-11-10 00:11:12.094459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.094491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.094643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.094696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.094841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.094894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.095078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.095117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.095240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.095274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.095406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.095439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.095599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.095654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.095760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.095793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.095929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.095962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.096095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.096147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 
00:37:46.220 [2024-11-10 00:11:12.096307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.096356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.096468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.096501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.096658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.096692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.096827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.096860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.097014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.097055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.097225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.097263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.097430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.097463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.097600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.097633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.220 [2024-11-10 00:11:12.097743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.220 [2024-11-10 00:11:12.097777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.220 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.097922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.097958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 
00:37:46.221 [2024-11-10 00:11:12.098113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.098145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.098259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.098292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.098450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.098482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.098639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.098676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.098857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.098890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.099037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.099073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.099251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.099287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.099428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.099464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.099625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.099659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.099796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.099829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 
00:37:46.221 [2024-11-10 00:11:12.099958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.099994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.100149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.100182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.100320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.100352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.100480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.100530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.100700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.100734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.100841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.100873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.101005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.101037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.101197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.101246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.101405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.101442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.101592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.101645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 
00:37:46.221 [2024-11-10 00:11:12.101780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.101814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.101978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.102012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.102143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.102175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.102294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.102330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.102458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.102492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.102629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.102662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.102790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.102833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.102994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.103027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.103187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.103219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.103312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.103360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 
00:37:46.221 [2024-11-10 00:11:12.103536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.103569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.103727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.103761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.103920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.103953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.104056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.104107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.221 [2024-11-10 00:11:12.104281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.221 [2024-11-10 00:11:12.104322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.221 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.104453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.104489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.104605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.104639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.104748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.104782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.105005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.105053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.105170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.105221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 
00:37:46.222 [2024-11-10 00:11:12.105428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.105467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.105573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.105618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.105770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.105804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.105944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.105978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.106084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.106117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.106227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.106260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.106388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.106425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.106574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.106617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.106747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.106780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.106944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.106976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 
00:37:46.222 [2024-11-10 00:11:12.107109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.107160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.107297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.107330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.107464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.107496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.107628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.107662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.107823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.107856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.107984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.108016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.108138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.108171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.108300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.108350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.108506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.108542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 00:37:46.222 [2024-11-10 00:11:12.108663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.108697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.222 qpair failed and we were unable to recover it. 
00:37:46.222 [2024-11-10 00:11:12.108804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.222 [2024-11-10 00:11:12.108836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.108968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.109005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.109176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.109212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.109384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.109417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.109563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.109604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.109735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.109768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.109915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.109951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.110121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.110157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.110314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.110346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.110502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.110535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 
00:37:46.223 [2024-11-10 00:11:12.110674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.110707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.110819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.110853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.110993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.111025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.111139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.111189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.111343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.111376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.111522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.111554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.111747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.111781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.111912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.111963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.112138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.112174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.112349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.112385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 
00:37:46.223 [2024-11-10 00:11:12.112505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.112538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.112653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.112686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.112817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.112849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.223 qpair failed and we were unable to recover it. 00:37:46.223 [2024-11-10 00:11:12.112994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.223 [2024-11-10 00:11:12.113030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.113181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.113213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.113370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.113403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.113597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.113634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.113811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.113844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.114023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.114055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.114152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.114185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 
00:37:46.224 [2024-11-10 00:11:12.114346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.114379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.114548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.114583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.114719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.114752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.114887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.114919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.115047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.115096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.115254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.115287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.115459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.115491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.115621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.115654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.115788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.115834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.116007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.116044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 
00:37:46.224 [2024-11-10 00:11:12.116164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.116196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.116333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.116369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.116524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.116560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.116722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.116755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.116917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.116949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.117053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.224 [2024-11-10 00:11:12.117085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.224 qpair failed and we were unable to recover it. 00:37:46.224 [2024-11-10 00:11:12.117218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.117250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.117412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.117447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.117605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.117638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.117750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.117782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 
00:37:46.225 [2024-11-10 00:11:12.117887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.117918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.118095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.118145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.118277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.118310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.118436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.118468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.118623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.118659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.118780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.118816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.118983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.119016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.119114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.119146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.119309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.119344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.119510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.119542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 
00:37:46.225 [2024-11-10 00:11:12.119683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.119716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.119848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.119897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.120011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.120047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.120220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.120256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.120417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.120448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.120633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.120670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.120819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.120855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.120992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.225 [2024-11-10 00:11:12.121028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.225 qpair failed and we were unable to recover it. 00:37:46.225 [2024-11-10 00:11:12.121163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.121196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.121323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.121355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 
00:37:46.226 [2024-11-10 00:11:12.121499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.121535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.121666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.121717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.121875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.121908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.122021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.122072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.122207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.122243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.122357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.122408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.122543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.122575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.122710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.122762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.122910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.122946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.123086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.123122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 
00:37:46.226 [2024-11-10 00:11:12.123235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.123267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.123393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.123430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.123600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.123637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.123762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.123811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.123970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.124002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.124144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.124180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.124349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.124385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.124519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.124555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.124751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.124784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.124938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.124973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 
00:37:46.226 [2024-11-10 00:11:12.125078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.226 [2024-11-10 00:11:12.125114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.226 qpair failed and we were unable to recover it. 00:37:46.226 [2024-11-10 00:11:12.125262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.227 [2024-11-10 00:11:12.125299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.227 qpair failed and we were unable to recover it. 00:37:46.227 [2024-11-10 00:11:12.125465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.227 [2024-11-10 00:11:12.125501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.227 qpair failed and we were unable to recover it. 00:37:46.227 [2024-11-10 00:11:12.125656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.227 [2024-11-10 00:11:12.125689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.227 qpair failed and we were unable to recover it. 00:37:46.227 [2024-11-10 00:11:12.125876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.227 [2024-11-10 00:11:12.125912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.227 qpair failed and we were unable to recover it. 00:37:46.227 [2024-11-10 00:11:12.126065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.227 [2024-11-10 00:11:12.126101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.227 qpair failed and we were unable to recover it. 00:37:46.227 [2024-11-10 00:11:12.126231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.227 [2024-11-10 00:11:12.126263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.227 qpair failed and we were unable to recover it. 00:37:46.227 [2024-11-10 00:11:12.126370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.227 [2024-11-10 00:11:12.126403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.227 qpair failed and we were unable to recover it. 00:37:46.227 [2024-11-10 00:11:12.126576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.227 [2024-11-10 00:11:12.126619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.227 qpair failed and we were unable to recover it. 00:37:46.227 [2024-11-10 00:11:12.126770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.227 [2024-11-10 00:11:12.126803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.227 qpair failed and we were unable to recover it. 
00:37:46.227 [2024-11-10 00:11:12.126959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.227 [2024-11-10 00:11:12.126991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.227 qpair failed and we were unable to recover it.
00:37:46.229 [2024-11-10 00:11:12.145927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor
00:37:46.229 [2024-11-10 00:11:12.146166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.229 [2024-11-10 00:11:12.146219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.229 qpair failed and we were unable to recover it.
00:37:46.229 [2024-11-10 00:11:12.147604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.229 [2024-11-10 00:11:12.147652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:46.229 qpair failed and we were unable to recover it.
00:37:46.230 [2024-11-10 00:11:12.158760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.230 [2024-11-10 00:11:12.158807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.230 qpair failed and we were unable to recover it.
00:37:46.231 [2024-11-10 00:11:12.164436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.164472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.164579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.164627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.164747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.164779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.164943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.164997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.165131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.165170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.165346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.165384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.165535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.165572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.165736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.165768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.165933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.165999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.166153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.166209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 
00:37:46.231 [2024-11-10 00:11:12.166421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.166475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.166644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.166679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.166812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.166845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.166995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.167046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.167189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.167224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.167356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.167411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.167595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.167644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.167791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.167844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.167982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.168021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.168132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.168168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 
00:37:46.231 [2024-11-10 00:11:12.168317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.168352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.168466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.168502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.168656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.168691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.168841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.168896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.169034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.169068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.169226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.169283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.169422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.169455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.169599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.169636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.169798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.169864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.170004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.170039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 
00:37:46.231 [2024-11-10 00:11:12.170177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.170209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.170367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.170399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.170532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.170564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.170696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.170743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.170885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.170938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.171078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.171133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.171310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.171348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.171466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.171503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.171624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.171666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.171792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.171840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 
00:37:46.231 [2024-11-10 00:11:12.171988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.172045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.172233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.172269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.172417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.172455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.172648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.172682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.172840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.172888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.173080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.173133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.173267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.173319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.173452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.173485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.173673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.173728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.173887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.173927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 
00:37:46.231 [2024-11-10 00:11:12.174044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.174081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.174255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.174293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.174424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.174478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.174688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.174736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.174875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.174929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.175080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.175133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.175289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.175339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.175476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.175509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.175645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.175693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.175815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.175851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 
00:37:46.231 [2024-11-10 00:11:12.175984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.176019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.231 qpair failed and we were unable to recover it. 00:37:46.231 [2024-11-10 00:11:12.176179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.231 [2024-11-10 00:11:12.176213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.176351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.176384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.176511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.176544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.176688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.176722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.176843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.176916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.177069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.177124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.177290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.177329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.177490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.177524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.177688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.177723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 
00:37:46.232 [2024-11-10 00:11:12.177849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.177885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.178035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.178073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.178225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.178263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.178463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.178530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.178652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.178688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.178837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.178890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.179000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.179036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.179187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.179240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.179421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.179469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.179630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.179679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 
00:37:46.232 [2024-11-10 00:11:12.179844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.179893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.180022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.180060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.180279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.180339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.180497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.180530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.180704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.180738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.180884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.180937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.181234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.181298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.181483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.181551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.181722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.181755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.181856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.181908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 
00:37:46.232 [2024-11-10 00:11:12.182132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.182188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.182372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.182409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.182596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.182664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.182812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.182860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.183001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.183054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.183243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.183303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.183444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.183478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.183614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.183649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.183795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.183830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.183956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.183989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 
00:37:46.232 [2024-11-10 00:11:12.184127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.184159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.184271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.184305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.184412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.184444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.184579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.184619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.184779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.184837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.184982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.185022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.185134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.185167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.185302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.185335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.185517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.185581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.185736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.185773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 
00:37:46.232 [2024-11-10 00:11:12.185887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.185921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.186074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.186127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.186225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.186258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.186391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.186424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.186550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.186584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.186736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.186789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.186962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.187015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.187239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.187277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.187401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.187436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.187576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.187616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 
00:37:46.232 [2024-11-10 00:11:12.187734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.232 [2024-11-10 00:11:12.187771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.232 qpair failed and we were unable to recover it. 00:37:46.232 [2024-11-10 00:11:12.187926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.187977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.188106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.188158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.188319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.188353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.188470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.188518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.188661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.188708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.188865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.188913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.189080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.189119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.189261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.189298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.189443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.189480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 
00:37:46.233 [2024-11-10 00:11:12.189608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.189643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.189819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.189872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.190012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.190051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.190192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.190230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.190347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.190383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.190537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.190570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.190715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.190750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.190875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.190912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.191090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.191128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.191268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.191305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 
00:37:46.233 [2024-11-10 00:11:12.191437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.191471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.191595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.191643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.191812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.191846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.191948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.192001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.192171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.192206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.192405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.192481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.192646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.192683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.192847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.192900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.193051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.193102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.193232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.193289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 
00:37:46.233 [2024-11-10 00:11:12.193423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.193457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.193562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.193605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.193760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.193798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.193955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.194014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.194273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.194330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.194459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.194492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.194596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.194630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.194763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.194810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.195003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.195067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.195272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.195330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 
00:37:46.233 [2024-11-10 00:11:12.195486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.195523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.195689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.195725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.195854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.195902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.196074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.196114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.196371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.196427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.196582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.196622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.196754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.196788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.196925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.196957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.197163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.197201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.197347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.197383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 
00:37:46.233 [2024-11-10 00:11:12.197561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.197614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.197747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.197795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.197995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.198049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.198313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.198350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.198476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.198509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.198620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.198654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.198795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.198827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.198954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.199004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.199117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.199153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.199292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.199328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 
00:37:46.233 [2024-11-10 00:11:12.199439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.199475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.199633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.199668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.199793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.233 [2024-11-10 00:11:12.199841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.233 qpair failed and we were unable to recover it. 00:37:46.233 [2024-11-10 00:11:12.200031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.200084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.200203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.200256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.200443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.200501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.200643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.200678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.200824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.200876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.201010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.201045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.201178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.201211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 
00:37:46.234 [2024-11-10 00:11:12.201345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.201379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.201491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.201524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.201687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.201735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.201855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.201911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.202147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.202200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.202399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.202454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.202614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.202649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.202782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.202834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.202992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.203030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.203247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.203304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 
00:37:46.234 [2024-11-10 00:11:12.203418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.203455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.203595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.203629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.203789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.203822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.204039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.204072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.204333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.204391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.204515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.204552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.204734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.204782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.205004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.205074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.205276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.205329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.205434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.205469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 
00:37:46.234 [2024-11-10 00:11:12.205614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.205649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.205829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.205881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.206146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.206206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.206471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.206528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.206660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.206697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.206842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.206881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.207030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.207080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.207280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.207350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.207485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.207518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.207665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.207719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 
00:37:46.234 [2024-11-10 00:11:12.207886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.207950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.208101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.208141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.208264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.208302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.208420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.208459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.208601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.208654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.208813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.208857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.208982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.209018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.209140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.209177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.209376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.209429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.209578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.209620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 
00:37:46.234 [2024-11-10 00:11:12.209774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.209836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.209991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.210045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.210157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.210190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.210340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.210388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.210507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.210543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.210706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.210754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.210896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.210931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.211094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.211127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.211233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.211266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.211403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.211437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 
00:37:46.234 [2024-11-10 00:11:12.211542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.211579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.211737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.211772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.211943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.211985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.212243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.212299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.212562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.212630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.212785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.212819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.234 [2024-11-10 00:11:12.213038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.234 [2024-11-10 00:11:12.213104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.234 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.213222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.213258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.213395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.213446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.213610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.213643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 
00:37:46.235 [2024-11-10 00:11:12.213804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.213851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.213992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.214044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.214339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.214408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.214553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.214596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.214712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.214746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.214899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.214954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.215136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.215188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.215342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.215394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.215530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.215566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.215705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.215739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 
00:37:46.235 [2024-11-10 00:11:12.215868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.215905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.216172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.216240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.216411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.216447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.216611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.216646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.216809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.216842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.216993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.217035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.217178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.217214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.217414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.217450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.217577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.217622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.217777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.217810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 
00:37:46.235 [2024-11-10 00:11:12.217972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.218008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.218127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.218164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.218266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.218302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.218444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.218492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.218627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.218675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.218842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.218881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.219041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.219078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.219255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.219292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.219438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.219474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.219612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.219663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 
00:37:46.235 [2024-11-10 00:11:12.219778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.219817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.219973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.220027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.220280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.220335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.220477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.220511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.220671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.220719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.220860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.220915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.221123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.221159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.221273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.221310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.221464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.221500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.221628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.221662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 
00:37:46.235 [2024-11-10 00:11:12.221800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.221833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.221995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.222028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.222181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.222278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.222430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.222478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.222640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.222673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.222774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.222807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.222910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.222942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.223126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.223162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.223359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.223408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.223568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.223611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 
00:37:46.235 [2024-11-10 00:11:12.223780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.223827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.224013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.224050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.224158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.224210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.224351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.224388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.224536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.235 [2024-11-10 00:11:12.224570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.235 qpair failed and we were unable to recover it. 00:37:46.235 [2024-11-10 00:11:12.224720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.224773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.224883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.224917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.225075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.225112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.225258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.225294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.225400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.225436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 
00:37:46.236 [2024-11-10 00:11:12.225609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.225657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.225775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.225811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.225951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.225988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.226213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.226271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.226501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.226539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.226705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.226751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.227032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.227085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.227347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.227407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.227560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.227604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.227740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.227773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 
00:37:46.236 [2024-11-10 00:11:12.227905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.227953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.228150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.228211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.228447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.228501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.228639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.228673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.228834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.228886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.229024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.229076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.229234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.229291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.229429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.229464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.229613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.229682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.229833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.229873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 
00:37:46.236 [2024-11-10 00:11:12.230070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.230130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.230260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.230293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.230407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.230445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.230613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.230662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.230780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.230816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.230968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.231018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.231168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.231223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.231360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.231394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.231498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.231534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.231681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.231749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 
00:37:46.236 [2024-11-10 00:11:12.231902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.231942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.232133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.232194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.232445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.232500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.232671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.232705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.232835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.232890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.233141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.233217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.233425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.233460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.233582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.233623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.233734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.233767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.233892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.233930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 
00:37:46.236 [2024-11-10 00:11:12.234085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.234122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.234239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.234276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.234414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.234450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.234639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.234673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.234774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.234807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.234953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.234987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.235144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.235181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.235381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.235417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.235561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.235608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.235763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.235798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 
00:37:46.236 [2024-11-10 00:11:12.235954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.235991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.236105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.236141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.236250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.236285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.236433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.236469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.236652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.236701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.236835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.236904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.237039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.237100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.237352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.237436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.236 [2024-11-10 00:11:12.237561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.236 [2024-11-10 00:11:12.237607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.236 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.237744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.237777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 
00:37:46.237 [2024-11-10 00:11:12.237954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.238031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.238262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.238319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.238492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.238541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.238696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.238731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.238896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.238949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.239138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.239178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.239377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.239416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.239567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.239626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.239760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.239795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.239957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.239994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 
00:37:46.237 [2024-11-10 00:11:12.240149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.240209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.240462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.240520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.240687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.240721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.240862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.240927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.241206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.241264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.241398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.241457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.241613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.241667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.241805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.241839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.242019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.242056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.242180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.242218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 
00:37:46.237 [2024-11-10 00:11:12.242364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.242401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.242556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.242612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.242770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.242805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.242948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.242986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.243134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.243172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.243310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.243347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.243488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.243537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.243650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.243685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.243843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.243891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.244083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.244136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 
00:37:46.237 [2024-11-10 00:11:12.244297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.244336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.244445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.244482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.244638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.244672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.244834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.244886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.245094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.245156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.245335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.245406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.245557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.245602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.245724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.245756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.245852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.245884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.246012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.246045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 
00:37:46.237 [2024-11-10 00:11:12.246290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.246394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.246578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.246618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.246783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.246832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.247036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.247099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.247239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.247293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.247408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.247445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.247611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.247676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.247860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.247908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.248103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.248158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.248360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.248435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 
00:37:46.237 [2024-11-10 00:11:12.248577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.248623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.248760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.248794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.248980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.249015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.249123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.249156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.249278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.249313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.249463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.249501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.249674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.249723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.249851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.249905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.250029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.250079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 00:37:46.237 [2024-11-10 00:11:12.250203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.237 [2024-11-10 00:11:12.250242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.237 qpair failed and we were unable to recover it. 
00:37:46.237 [2024-11-10 00:11:12.250433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.250484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.250678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.250729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.250916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.250963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.251098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.251136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.251246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.251283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.251457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.251492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.251628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.251699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.251833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.251898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.252055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.252094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.252211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.252247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 
00:37:46.238 [2024-11-10 00:11:12.252358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.252393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.252535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.252583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.252739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.252775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.252940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.252974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.253144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.253196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.253311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.253346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.253462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.253500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.253631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.253666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.253816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.253849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.253994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.254049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 
00:37:46.238 [2024-11-10 00:11:12.254280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.254316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.254469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.254504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.254652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.254691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.254821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.254886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.255081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.255148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.255336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.255375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.255568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.255646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.255802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.255836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.255993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.256067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.256240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.256334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 
00:37:46.238 [2024-11-10 00:11:12.256475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.256511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.256679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.256712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.256814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.256847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.256970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.257007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.257167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.257203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.257349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.257390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.257513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.257549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.257705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.257753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.257891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.257938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.258069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.258122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 
00:37:46.238 [2024-11-10 00:11:12.258293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.258350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.258466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.258502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.258622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.258673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.258786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.258821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.259067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.259130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.259295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.259365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.259512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.259548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.259718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.259756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.259897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.259938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.260092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.260126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 
00:37:46.238 [2024-11-10 00:11:12.260259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.260310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.260451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.260488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.260608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.260643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.260776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.260810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.260931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.260965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.261096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.261158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.261286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.261336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.261491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.261527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.261691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.238 [2024-11-10 00:11:12.261743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.238 qpair failed and we were unable to recover it. 00:37:46.238 [2024-11-10 00:11:12.261901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.261939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 
00:37:46.239 [2024-11-10 00:11:12.262077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.262130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.262285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.262324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.262486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.262525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.262647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.262680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.262820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.262853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.262993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.263041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.263201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.263236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.263352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.263388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.263520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.263553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.263693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.263741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 
00:37:46.239 [2024-11-10 00:11:12.263925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.263979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.264131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.264179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.264332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.264387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.264527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.264561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.264733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.264785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.264933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.264986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.265159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.265218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.265358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.265391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.265520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.265555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.265711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.265769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 
00:37:46.239 [2024-11-10 00:11:12.265939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.265978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.266159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.266196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.266364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.266420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.266558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.266643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.266814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.266866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.267014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.267069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.267224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.267286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.267393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.267427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.267579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.267640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.267771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.267805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 
00:37:46.239 [2024-11-10 00:11:12.267997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.268033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.268225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.268260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.268377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.268413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.268580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.268643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.268809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.268857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.269012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.269068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.269201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.269235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.269413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.269447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.269559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.269613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.269769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.269806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 
00:37:46.239 [2024-11-10 00:11:12.269936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.269973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.270121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.270158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.270319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.270382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.270514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.270546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.270672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.270722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.270838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.270876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.271062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.271098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.271240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.271275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.271436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.271472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.271617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.271659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 
00:37:46.239 [2024-11-10 00:11:12.271791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.271826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.271998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.272054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.272235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.272292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.272450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.272487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.272644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.272691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.272813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.272849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.273018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.273055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.273203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.273241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.273406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.273440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.273580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.273633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 
00:37:46.239 [2024-11-10 00:11:12.273792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.273829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.239 [2024-11-10 00:11:12.273958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.239 [2024-11-10 00:11:12.273994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.239 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.274142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.274179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.274295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.274331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.274511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.274559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.274737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.274774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.274893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.274946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.275085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.275122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.275303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.275361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.275539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.275575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 
00:37:46.240 [2024-11-10 00:11:12.275708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.275743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.275874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.275924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.276099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.276149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.276329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.276407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.276575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.276621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.276765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.276799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.277012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.277047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.277158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.277194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.277377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.277413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.277546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.277581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 
00:37:46.240 [2024-11-10 00:11:12.277728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.277770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.277899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.277954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.278107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.278163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.278308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.278358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.278521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.278555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.278707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.278747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.278914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.278966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.279136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.279203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.279396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.279433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.279573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.279636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 
00:37:46.240 [2024-11-10 00:11:12.279758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.279790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.279899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.279933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.280080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.280114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.280254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.280287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.280423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.280456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.280602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.280657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.280780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.280817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.280977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.281013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.281152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.281189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.281341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.281397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 
00:37:46.240 [2024-11-10 00:11:12.281547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.281584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.281752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.281806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.281959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.282016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.282121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.282154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.282291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.282325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.282473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.282606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.282781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.282829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.283076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.283133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.283315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.283375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.283545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.283581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 
00:37:46.240 [2024-11-10 00:11:12.283782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.283829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.283948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.283984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.284136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.284189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.284353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.284403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.284548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.284603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.284753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.284800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.284969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.285009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.285147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.285200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.285375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.285413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.285605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.285660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 
00:37:46.240 [2024-11-10 00:11:12.285803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.285839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.240 [2024-11-10 00:11:12.286056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.240 [2024-11-10 00:11:12.286096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.240 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.286215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.286257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.286455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.286503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.286660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.286703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.286855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.286924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.287046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.287086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.287226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.287278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.287445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.287479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.287578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.287618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 
00:37:46.241 [2024-11-10 00:11:12.287757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.287805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.287989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.288027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.288148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.288183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.288294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.288330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.288447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.288483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.288673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.288707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.288905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.288978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.289160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.289198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.289383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.289418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.289533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.289569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 
00:37:46.241 [2024-11-10 00:11:12.289782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.289830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.290015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.290068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.290243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.290282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.290394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.290430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.290555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.290600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.290732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.290780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.290962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.291031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.291330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.291368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.291475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.291511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.291680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.291714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 
00:37:46.241 [2024-11-10 00:11:12.291873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.291921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.292080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.292132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.292257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.292314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.292428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.292462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.292623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.292660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.292800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.292833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.292976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.293011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.293115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.293148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.293307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.293341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.293451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.293484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 
00:37:46.241 [2024-11-10 00:11:12.293620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.293668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.293799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.293834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.293973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.294031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.294220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.294276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.294394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.294446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.294554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.294594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.294751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.294785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.294941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.294977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.295185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.295222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.295355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.295406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 
00:37:46.241 [2024-11-10 00:11:12.295525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.295561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.295729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.295764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.295916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.295954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.296107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.296144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.296267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.296319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.296463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.296500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.296646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.296680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.296807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.296855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.297016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.297053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.297174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.297227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 
00:37:46.241 [2024-11-10 00:11:12.297340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.297375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.297527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.297561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.297677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.241 [2024-11-10 00:11:12.297709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.241 qpair failed and we were unable to recover it. 00:37:46.241 [2024-11-10 00:11:12.297819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.297851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.298043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.298077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.298226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.298262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.298384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.298422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.298581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.298641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.298758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.298792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.298925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.298978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 
00:37:46.242 [2024-11-10 00:11:12.299134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.299187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.299321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.299373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.299509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.299544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.299680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.299728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.299881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.299929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.300097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.300137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.300294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.300332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.300471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.300509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.300672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.300708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.300835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.300887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 
00:37:46.242 [2024-11-10 00:11:12.301047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.301099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.301226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.301264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.301426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.301480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.301610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.301658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.301773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.301828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.302002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.302039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.302171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.302209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.302359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.302396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.302544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.302581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.302766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.302814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 
00:37:46.242 [2024-11-10 00:11:12.302978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.303021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.303218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.303252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.303388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.303425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.303569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.303615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.303742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.303776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.303899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.303948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.304087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.304124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.304234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.304270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.304419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.304466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.304623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.304670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 
00:37:46.242 [2024-11-10 00:11:12.304799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.304847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.304993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.305031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.305158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.305209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.305353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.305388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.305503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.305538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.305688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.305736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.305881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.305920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.306037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.306074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.306218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.306252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.306380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.306413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 
00:37:46.242 [2024-11-10 00:11:12.306564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.306615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.306735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.306768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.306894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.306927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.307058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.307090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.307262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.307297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.307459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.307497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.307606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.307642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.307836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.307874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.308010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.308047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.308176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.308209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 
00:37:46.242 [2024-11-10 00:11:12.308363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.308396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.308539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.308573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.308731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.308769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.308912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.308948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.309092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.309125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.309253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.309302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.309447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.242 [2024-11-10 00:11:12.309480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.242 qpair failed and we were unable to recover it. 00:37:46.242 [2024-11-10 00:11:12.309627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.309662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.309795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.309828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.309952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.309986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 
00:37:46.243 [2024-11-10 00:11:12.310123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.310155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.310259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.310293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.310437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.310475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.310625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.310686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.310803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.310838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.310944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.310977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.311113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.311146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.311256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.311289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.311397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.311430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.311556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.311613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 
00:37:46.243 [2024-11-10 00:11:12.311738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.311774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.311896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.311944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.312068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.312102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.312235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.312268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.312412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.312444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.312555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.312594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.312727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.312760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.312872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.312906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.313042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.313075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.313187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.313219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 
00:37:46.243 [2024-11-10 00:11:12.313356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.313388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.313523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.313556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.313680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.313728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.313870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.313906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.314047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.314081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.314244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.314277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.314420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.314469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.314630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.314678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.314791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.314827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.314968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.315003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 
00:37:46.243 [2024-11-10 00:11:12.315141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.315175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.315310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.315344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.315448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.315487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.315632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.315670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.315813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.315847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.315958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.315992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.316133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.316167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.316309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.316358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.316475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.316510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.316670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.316718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 
00:37:46.243 [2024-11-10 00:11:12.316827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.316862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.316998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.317031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.317169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.317202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.317338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.317372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.317512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.317549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.317707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.317755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.317889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.317937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.318110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.318146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.318286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.318319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.318446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.318480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 
00:37:46.243 [2024-11-10 00:11:12.318602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.318650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.318779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.318826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.318963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.318998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.319110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.319143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.319278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.319310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.319411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.319444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.243 [2024-11-10 00:11:12.319565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.243 [2024-11-10 00:11:12.319624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.243 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.319754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.319803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.319944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.319992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.320144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.320180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 
00:37:46.244 [2024-11-10 00:11:12.320291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.320324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.320460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.320492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.320626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.320661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.320774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.320823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.320946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.320982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.321093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.321128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.321261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.321295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.321441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.321474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.321600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.321664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.321809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.321844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 
00:37:46.244 [2024-11-10 00:11:12.322009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.322043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.322156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.322192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.322331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.322372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.322513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.322546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.322666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.322701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.322816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.322848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.322985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.323019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.323140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.323173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.323319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.323354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.323502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.323550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 
00:37:46.244 [2024-11-10 00:11:12.323679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.323727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.323842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.323876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.324012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.324044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.324155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.324189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.324290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.324322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.324457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.324492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.324606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.324644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.324793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.324830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.324966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.325000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.325114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.325148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 
00:37:46.244 [2024-11-10 00:11:12.325282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.325315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.325449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.325481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.325594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.325629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.325733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.325765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.325903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.325936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.326071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.326104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.326218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.326266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.326446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.326482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.326618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.326653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.326768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.326802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 
00:37:46.244 [2024-11-10 00:11:12.326937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.326970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.327119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.327167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.327311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.327346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.327485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.327518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.327653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.327687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.327815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.327849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.327993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.328041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.328154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.328189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.328306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.328344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.328482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.328522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 
00:37:46.244 [2024-11-10 00:11:12.328649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.328683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.328812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.328846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.329010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.329048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.329182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.329216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.329350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.329384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.329499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.329534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.329680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.329715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.329826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.329860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.329990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.330023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 00:37:46.244 [2024-11-10 00:11:12.330150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.244 [2024-11-10 00:11:12.330188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.244 qpair failed and we were unable to recover it. 
00:37:46.244 [2024-11-10 00:11:12.330354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.330389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.330551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.330584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.330703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.330736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.330875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.330910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.331075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.331108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.331214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.331249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.331388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.331424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.331579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.331635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.331758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.331795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.331931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.331965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 
00:37:46.245 [2024-11-10 00:11:12.332129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.332163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.332269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.332303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.332409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.332444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.332621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.332657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.332787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.332834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.332991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.333027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.333128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.333162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.333325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.333358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.333497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.333532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.333646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.333683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 
00:37:46.245 [2024-11-10 00:11:12.333820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.333856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.333996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.334030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.334140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.334174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.334311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.334345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.334470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.334517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.334715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.334763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.334879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.334913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.335018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.335051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.335187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.335220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.335356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.335389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 
00:37:46.245 [2024-11-10 00:11:12.335516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.335564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.335759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.335807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.335954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.335995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.336136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.336169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.336332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.336366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.336511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.336550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.336693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.336741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.336885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.336920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.337060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.337093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.337201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.337233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 
00:37:46.245 [2024-11-10 00:11:12.337333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.337366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.337519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.337567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.337699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.337735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.337862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.337910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.338056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.338092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.338232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.338267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.338415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.338449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.338555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.338595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.338715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.338753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.338896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.338932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 
00:37:46.245 [2024-11-10 00:11:12.339085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.339120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.339282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.339327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.339487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.339520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.339656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.339702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.339836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.339872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.339985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.340020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.340167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.340201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.340331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.340364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.340473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.340506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.340639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.340675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 
00:37:46.245 [2024-11-10 00:11:12.340789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.340822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.245 qpair failed and we were unable to recover it. 00:37:46.245 [2024-11-10 00:11:12.340924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.245 [2024-11-10 00:11:12.340958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.341100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.341134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.341257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.341305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.341448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.341483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.341613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.341661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.341789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.341825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.341930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.341963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.342065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.342098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.342220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.342255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 
00:37:46.246 [2024-11-10 00:11:12.342389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.342424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.342521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.342554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.342686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.342725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.342856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.342904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.343022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.343058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.343210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.343245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.343351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.343384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.343511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.343559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.343733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.343768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.343922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.343957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 
00:37:46.246 [2024-11-10 00:11:12.344094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.344127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.344228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.344261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.344404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.344440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.344550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.344599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.344755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.344792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.344938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.344972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.345080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.345112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.345215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.345249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.345392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.345426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.345595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.345643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 
00:37:46.246 [2024-11-10 00:11:12.345771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.345819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.345934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.345969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.346103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.346137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.346240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.346273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.346383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.346417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.346553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.346598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.346743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.346781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.346947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.346980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.347115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.347148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.347261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.347294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 
00:37:46.246 [2024-11-10 00:11:12.347427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.347463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.347619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.347675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.347820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.347854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.348018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.348052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.348162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.348194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.246 [2024-11-10 00:11:12.348298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.246 [2024-11-10 00:11:12.348330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.246 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.348505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.348553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.348727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.348775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.348960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.349008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.349129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.349166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 
00:37:46.247 [2024-11-10 00:11:12.349340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.349374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.349486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.349519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.349653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.349701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.349836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.349884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.350032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.350068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.350168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.350201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.350361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.350394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.350509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.350557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.350683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.350718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.350832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.350869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 
00:37:46.247 [2024-11-10 00:11:12.351015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.351048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.351183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.351216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.351348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.351381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.351498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.351533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.351687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.351736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.351855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.351891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.352002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.352036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.352146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.352178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.352318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.352351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.352485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.352520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 
00:37:46.247 [2024-11-10 00:11:12.352654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.352690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.352821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.352869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.353024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.353058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.353161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.353193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.353325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.353357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.353464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.353497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.353651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.353698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.353836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.353883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.354001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.354036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.354175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.354215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 
00:37:46.247 [2024-11-10 00:11:12.354325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.354358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.354504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.354542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.354684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.354721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.354830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.354864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.354997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.355030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.355172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.355206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.355347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.355385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.355550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.355610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.355744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.355792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.355932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.355966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 
00:37:46.247 [2024-11-10 00:11:12.356107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.356140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.356269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.356301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.356406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.356440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.356557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.356599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.356704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.356737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.356849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.356886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.357033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.357080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.357203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.357238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.357344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.357377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.357508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.357540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 
00:37:46.247 [2024-11-10 00:11:12.357687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.357735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.357859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.357892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.358009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.358043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.358177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.358210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.247 [2024-11-10 00:11:12.358345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.247 [2024-11-10 00:11:12.358378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.247 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.358528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.358561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.358714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.358750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.358883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.358916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.359050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.359082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.359212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.359281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 
00:37:46.248 [2024-11-10 00:11:12.359401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.359434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.359561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.359599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.359713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.359746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.359846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.359878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.360021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.360054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.360219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.360254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.360392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.360425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.360583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.360641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.360819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.360855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.360996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.361043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 
00:37:46.248 [2024-11-10 00:11:12.361194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.361228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.361333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.361368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.361483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.361514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.361634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.361671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.361789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.361822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.361953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.361985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.362145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.362176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.362271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.362303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.362411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.362442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.362554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.362599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 
00:37:46.248 [2024-11-10 00:11:12.362737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.362785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.362949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.362996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.363134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.363168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.363308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.363342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.363451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.363484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.363599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.363632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.363742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.363779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.363896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.363929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.364071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.364104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.364213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.364247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 
00:37:46.248 [2024-11-10 00:11:12.364406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.364438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.364564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.364624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.364786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.364835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.364978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.365019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.365177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.365220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.248 [2024-11-10 00:11:12.365354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.248 [2024-11-10 00:11:12.365388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.248 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.365538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.365571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.365724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.365759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.365896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.365929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.366047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.366095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 
00:37:46.532 [2024-11-10 00:11:12.366209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.366244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.366369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.366417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.366532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.366565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.366707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.366744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.366881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.366916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.367053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.367087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.367226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.367261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.367391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.367424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.367563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.367606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.367725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.367765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 
00:37:46.532 [2024-11-10 00:11:12.367930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.367962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.368071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.368103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.368239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.368272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.368383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.368415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.368525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.368559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.368677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.368713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.368875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.368909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.369023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.369056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.369194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.369230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 00:37:46.532 [2024-11-10 00:11:12.369335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.532 [2024-11-10 00:11:12.369367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.532 qpair failed and we were unable to recover it. 
00:37:46.532 [2024-11-10 00:11:12.369482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.369518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.369622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.369657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.369765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.369803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.369973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.370008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.370119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.370155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.370299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.370333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.370466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.370500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.370612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.370648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.370761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.370796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.370914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.370950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 
00:37:46.533 [2024-11-10 00:11:12.371090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.371124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.371257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.371291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.371401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.371434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.371553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.371597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.371735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.371770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.371877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.371912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.372052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.372085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.372213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.372246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.372378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.372411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.372564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.372620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 
00:37:46.533 [2024-11-10 00:11:12.372743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.372778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.372889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.372923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.373087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.373120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.373224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.373258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.373392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.373425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.373559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.373598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.373723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.373759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.373876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.373910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.374026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.374058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.374201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.374242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 
00:37:46.533 [2024-11-10 00:11:12.374345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.374379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.374541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.374574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.374688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.374722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.533 qpair failed and we were unable to recover it. 00:37:46.533 [2024-11-10 00:11:12.374848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.533 [2024-11-10 00:11:12.374897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.375044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.375080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.375213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.375247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.375350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.375383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.375533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.375582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.375704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.375738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.375878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.375910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 
00:37:46.534 [2024-11-10 00:11:12.376044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.376076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.376191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.376225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.376375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.376411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.376527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.376573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.376693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.376727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.376867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.376900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.377034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.377069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.377189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.377228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.377386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.377421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.377570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.377627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 
00:37:46.534 [2024-11-10 00:11:12.377775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.377811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.377943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.377976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.378113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.378147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.378286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.378321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.378431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.378467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.378596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.378631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.378776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.378820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.378955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.378989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.379097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.379130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.379235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.379267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 
00:37:46.534 [2024-11-10 00:11:12.379370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.379401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.379511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.379544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.379698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.379747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.379882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.379930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.380040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.380075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.380193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.380228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.380341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.380374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.380506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.380539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.380644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.534 [2024-11-10 00:11:12.380679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.534 qpair failed and we were unable to recover it. 00:37:46.534 [2024-11-10 00:11:12.380801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.380842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 
00:37:46.535 [2024-11-10 00:11:12.380959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.380994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.381145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.381179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.381279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.381312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.381444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.381477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.381595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.381631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.381749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.381784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.381921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.381955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.382089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.382122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.382255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.382289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.382410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.382457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 
00:37:46.535 [2024-11-10 00:11:12.382573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.382617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.382723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.382757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.382926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.382959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.383073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.383106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.383212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.383246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.383391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.383426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.383578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.383621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.383763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.383797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.383926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.383959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.384072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.384105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 
00:37:46.535 [2024-11-10 00:11:12.384227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.384259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.384414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.384449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.384553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.384598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.384726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.384773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.384918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.384952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.385054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.385089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.385236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.385269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.385398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.385431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.385585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.385640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.385759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.385794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 
00:37:46.535 [2024-11-10 00:11:12.385942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.385978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.386145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.386179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.535 [2024-11-10 00:11:12.386322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.535 [2024-11-10 00:11:12.386354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.535 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.386489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.386521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.386659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.386707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.386867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.386915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.387033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.387068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.387232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.387265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.387399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.387432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.387565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.387610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 
00:37:46.536 [2024-11-10 00:11:12.387736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.387784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.387919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.387969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.388095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.388130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.388265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.388299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.388401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.388435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.388569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.388608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.388719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.388754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.388868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.388901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.389050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.389097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.389209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.389244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 
00:37:46.536 [2024-11-10 00:11:12.389357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.389395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.389501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.389535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.389676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.389710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.389851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.389887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.389997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.390029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.390164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.390199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.390338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.390372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.390510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.390558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.390685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.390720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.390834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.390867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 
00:37:46.536 [2024-11-10 00:11:12.391017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.391051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.391159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.391194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.391333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.391368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.391493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.391541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.391703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.391751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.391909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.391946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.392056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.392090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.536 [2024-11-10 00:11:12.392250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.536 [2024-11-10 00:11:12.392283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.536 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.392420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.392456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.392580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.392638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 
00:37:46.537 [2024-11-10 00:11:12.392764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.392803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.392913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.392946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.393087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.393120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.393221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.393253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.393390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.393425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.393560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.393605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.393764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.393812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.393952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.393987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.394097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.394129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.394285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.394323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 
00:37:46.537 [2024-11-10 00:11:12.394458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.394506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.394672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.394720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.394862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.394898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.395031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.395065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.395170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.395203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.395365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.395398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.395533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.395567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.395729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.395776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.395958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.395995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.396102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.396136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 
00:37:46.537 [2024-11-10 00:11:12.396249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.396283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.396405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.396442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.396596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.396631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.396750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.396783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.396884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.396916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.397047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.397080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.397247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.397282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.397426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.537 [2024-11-10 00:11:12.397460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.537 qpair failed and we were unable to recover it. 00:37:46.537 [2024-11-10 00:11:12.397597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.397659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.397806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.397841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 
00:37:46.538 [2024-11-10 00:11:12.398010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.398043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.398147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.398180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.398287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.398321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.398433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.398481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.398637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.398686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.398836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.398873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.399019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.399054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.399190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.399225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.399343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.399391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.399555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.399613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 
00:37:46.538 [2024-11-10 00:11:12.399788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.399825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.399963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.399998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.400104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.400139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.400300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.400334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.400461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.400495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.400623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.400671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.400826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.400875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.401040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.401075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.401215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.401251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.401387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.401425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 
00:37:46.538 [2024-11-10 00:11:12.401569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.401612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.401752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.401788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.401958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.401991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.402122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.402155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.402265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.402299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.402436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.402469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.402592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.402641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.402767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.402816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.402936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.402971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.403108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.403142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 
00:37:46.538 [2024-11-10 00:11:12.403256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.403289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.538 [2024-11-10 00:11:12.403399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.538 [2024-11-10 00:11:12.403433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.538 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.403571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.403619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.403749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.403786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.403937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.403985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.404100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.404134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.404269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.404302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.404440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.404472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.404582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.404624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.404759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.404794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 
00:37:46.539 [2024-11-10 00:11:12.404930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.404966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.405103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.405136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.405241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.405274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.405436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.405470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.405584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.405630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.405731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.405766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.405905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.405940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.406075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.406110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.406216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.406250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.406417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.406450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 
00:37:46.539 [2024-11-10 00:11:12.406584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.406625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.406743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.406777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.406889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.406924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.407088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.407135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.407249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.407283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.407421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.407454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.407559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.407598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.407696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.407729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.407824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.407856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.407959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.407997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 
00:37:46.539 [2024-11-10 00:11:12.408108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.408141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.408279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.408314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.408453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.408486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.408623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.408657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.408757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.408791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.408933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.408981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.409167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.539 [2024-11-10 00:11:12.409215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.539 qpair failed and we were unable to recover it. 00:37:46.539 [2024-11-10 00:11:12.409333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.409367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.409470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.409502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.409633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.409668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 
00:37:46.540 [2024-11-10 00:11:12.409798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.409831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.409967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.410000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.410138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.410171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.410282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.410315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.410470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.410518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.410656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.410704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.410818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.410853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.411017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.411051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.411167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.411202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.411341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.411379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 
00:37:46.540 [2024-11-10 00:11:12.411488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.411522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.411671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.411710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.411816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.411851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.411987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.412022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.412172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.412206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.412343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.412379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.412513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.412548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.412674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.412722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.412848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.412884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.413027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.413061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 
00:37:46.540 [2024-11-10 00:11:12.413200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.413233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.413368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.413415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.413551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.413594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.413733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.413767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.413873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.413906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.414043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.414077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.414232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.414279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.414420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.414455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.414584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.414639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.414754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.414793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 
00:37:46.540 [2024-11-10 00:11:12.414922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.414956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.415092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.415125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.415260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.415293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.540 [2024-11-10 00:11:12.415455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.540 [2024-11-10 00:11:12.415492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.540 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.415635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.415683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.415830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.415868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.415980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.416014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.416160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.416193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.416305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.416340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.416487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.416521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 
00:37:46.541 [2024-11-10 00:11:12.416632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.416666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.416791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.416828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.416965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.416999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.417145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.417178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.417316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.417349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.417483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.417518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.417653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.417701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.417818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.417852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.417961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.417994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.418100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.418135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 
00:37:46.541 [2024-11-10 00:11:12.418275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.418308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.418458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.418508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.418628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.418664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.418832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.418871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.418986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.419020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.419155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.419187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.419289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.419323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.419465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.419498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.419634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.419684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.419831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.419865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 
00:37:46.541 [2024-11-10 00:11:12.420005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.420038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.420171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.420203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.420334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.420367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.420478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.420510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.420612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.420645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.420820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.420868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.421009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.421045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.421161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.421194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.421307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.421340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.541 [2024-11-10 00:11:12.421499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.421538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 
00:37:46.541 [2024-11-10 00:11:12.421669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.541 [2024-11-10 00:11:12.421717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.541 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.421863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.421898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.422009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.422041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.422182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.422214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.422322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.422354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.422512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.422544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.422696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.422732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.422851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.422889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.423029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.423062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.423200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.423234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 
00:37:46.542 [2024-11-10 00:11:12.423332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.423366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.423496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.423544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.423690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.423724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.423843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.423877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.423984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.424016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.424187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.424223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.424331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.424366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.424473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.424507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.424626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.424660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.424795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.424828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 
00:37:46.542 [2024-11-10 00:11:12.424928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.424961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.425099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.425132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.425240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.425288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.425429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.425463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.425571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.425611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.425754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.425787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.425936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.425984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.426095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.426130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.426240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.542 [2024-11-10 00:11:12.426275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.542 qpair failed and we were unable to recover it. 00:37:46.542 [2024-11-10 00:11:12.426411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.426444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 
00:37:46.543 [2024-11-10 00:11:12.426570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.426627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.426743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.426778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.426946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.426982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.427089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.427122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.427271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.427307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.427445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.427478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.427635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.427684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.427829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.427864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.428004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.428037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.428142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.428181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 
00:37:46.543 [2024-11-10 00:11:12.428296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.428329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.428471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.428520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.428646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.428680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.428822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.428858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.428990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.429024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.429157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.429190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.429294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.429327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.429474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.429521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.429655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.429694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.429831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.429866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 
00:37:46.543 [2024-11-10 00:11:12.429974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.430007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.430113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.430146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.430250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.430284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.430430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.430464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.430600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.430649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.430809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.430845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.430983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.431016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.431146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.431178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.431308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.431340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.431448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.431484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 
00:37:46.543 [2024-11-10 00:11:12.431606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.431654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.431776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.431811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.431919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.431953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.432111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.432144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.432246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.432280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.432388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.543 [2024-11-10 00:11:12.432421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.543 qpair failed and we were unable to recover it. 00:37:46.543 [2024-11-10 00:11:12.432528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.432563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.432697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.432745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.432863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.432898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.433008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.433041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 
00:37:46.544 [2024-11-10 00:11:12.433179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.433212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.433348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.433381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.433512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.433547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.433723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.433759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.433896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.433929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.434035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.434069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.434198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.434232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.434367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.434400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.434509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.434544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.434665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.434702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 
00:37:46.544 [2024-11-10 00:11:12.434818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.434856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.434973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.435008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.435172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.435206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.435312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.435347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.435450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.435485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.435639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.435688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.435800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.435835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.435994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.436028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.436186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.436219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.436353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.436387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 
00:37:46.544 [2024-11-10 00:11:12.436492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.436526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.436654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.436702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.436842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.436880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.436989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.437024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.437158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.437191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.437321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.437354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.437464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.437498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.437647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.437696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.437859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.437896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.438041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.438075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 
00:37:46.544 [2024-11-10 00:11:12.438243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.438277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.438409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.438442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.438577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.438625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.544 qpair failed and we were unable to recover it. 00:37:46.544 [2024-11-10 00:11:12.438732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.544 [2024-11-10 00:11:12.438766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.438919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.438967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.439081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.439118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.439230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.439270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.439368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.439402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.439531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.439563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.439740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.439788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 
00:37:46.545 [2024-11-10 00:11:12.439932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.439966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.440106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.440140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.440275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.440307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.440464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.440498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.440605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.440652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.440792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.440827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.440990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.441022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.441155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.441188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.441327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.441360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.441509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.441557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 
00:37:46.545 [2024-11-10 00:11:12.441705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.441753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.441925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.441961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.442126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.442160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.442293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.442326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.442435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.442470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.442624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.442672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.442848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.442885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.443017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.443051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.443185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.443218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.443362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.443395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 
00:37:46.545 [2024-11-10 00:11:12.443528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.443576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.443728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.443762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.443898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.443945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.444077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.444112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.444250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.444283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.444446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.444479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.444640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.444689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.444849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.545 [2024-11-10 00:11:12.444897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.545 qpair failed and we were unable to recover it. 00:37:46.545 [2024-11-10 00:11:12.445045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.445080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.445180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.445213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 
00:37:46.546 [2024-11-10 00:11:12.445355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.445389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.445576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.445634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.445765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.445813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.445958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.445993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.446132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.446166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.446273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.446305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.446447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.446491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.446716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.446765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.446889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.446936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.447104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.447139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 
00:37:46.546 [2024-11-10 00:11:12.447276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.447309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.447455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.447492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.447624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.447672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.447859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.447907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.448020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.448056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.448192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.448225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.448333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.448368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.448511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.448545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.448706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.448767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.448893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.448929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 
00:37:46.546 [2024-11-10 00:11:12.449043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.449076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.449211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.449246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.449356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.449388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.449500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.449536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.449706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.449753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.449921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.449968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.450110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.450145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.450280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.450314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.450476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.450509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.450660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.450697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 
00:37:46.546 [2024-11-10 00:11:12.450831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.450865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.450975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.546 [2024-11-10 00:11:12.451009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.546 qpair failed and we were unable to recover it. 00:37:46.546 [2024-11-10 00:11:12.451172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.451212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.451383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.451431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.451551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.451595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.451714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.451748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.451885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.451919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.452036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.452071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.452237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.452272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.452407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.452439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 
00:37:46.547 [2024-11-10 00:11:12.452599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.452632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.452766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.452800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.452938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.452975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.453075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.453108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.453246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.453281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.453386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.453420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.453553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.453618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.453732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.453767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.453886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.453921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.454055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.454088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 
00:37:46.547 [2024-11-10 00:11:12.454196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.454229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.454344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.454377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.454519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.454564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.454712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.454759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.454912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.454947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.455050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.455084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.455203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.455235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.455394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.455427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.455560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.455606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.455726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.455758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 
00:37:46.547 [2024-11-10 00:11:12.455914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.455962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.456077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.456113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.456275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.456308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.547 [2024-11-10 00:11:12.456419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.547 [2024-11-10 00:11:12.456453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.547 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.456576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.456634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.456746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.456781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.456927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.456964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.457099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.457133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.457235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.457267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.457417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.457462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 
00:37:46.548 [2024-11-10 00:11:12.457604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.457659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.457784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.457833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.457982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.458018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.458133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.458167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.458337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.458372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.458505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.458551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.458716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.458764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.458869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.458917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.459085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.459119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.459227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.459260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 
00:37:46.548 [2024-11-10 00:11:12.459365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.459400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.459521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.459557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.459677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.459713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.459823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.459856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.459970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.460009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.460112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.460146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.460316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.460356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.460512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.460561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.460686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.460720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.460820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.460853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 
00:37:46.548 [2024-11-10 00:11:12.461012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.461045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.461177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.461209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.461320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.461355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.461506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.461555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.461692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.461740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.461908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.461944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.462049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.462085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.462191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.462225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.462336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.462370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 00:37:46.548 [2024-11-10 00:11:12.462473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.548 [2024-11-10 00:11:12.462507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.548 qpair failed and we were unable to recover it. 
00:37:46.548 [2024-11-10 00:11:12.462647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.462694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.462842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.462877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.463044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.463078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.463240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.463273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.463384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.463417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.463537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.463592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.463748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.463796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.463914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.463950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.464087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.464120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.464280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.464312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 
00:37:46.549 [2024-11-10 00:11:12.464428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.464466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.464618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3642373 Killed "${NVMF_APP[@]}" "$@" 00:37:46.549 [2024-11-10 00:11:12.464667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.464840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.464893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:46.549 [2024-11-10 00:11:12.465006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.465042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.465153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:46.549 [2024-11-10 00:11:12.465187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:46.549 [2024-11-10 00:11:12.465327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.465361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:46.549 [2024-11-10 00:11:12.465470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.465503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 
00:37:46.549 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.549 [2024-11-10 00:11:12.465648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.465685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.465839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.465886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.466030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.466066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.466230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.466264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.466393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.466426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.466528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.466561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.466688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.466722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.466868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.466903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.467033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.467065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 
00:37:46.549 [2024-11-10 00:11:12.467169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.467203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.467310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.467343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.467500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.467548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.467687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.467722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.467826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.549 [2024-11-10 00:11:12.467862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.549 qpair failed and we were unable to recover it. 00:37:46.549 [2024-11-10 00:11:12.467975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.468009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.468143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.468176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.468314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.468348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.468483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.468518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.468669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.468703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 
00:37:46.550 [2024-11-10 00:11:12.468813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.468847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.468964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.468997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.469138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.469171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.469312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.469346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.469459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.469491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3642945 00:37:46.550 [2024-11-10 00:11:12.469622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.469657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:46.550 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3642945 00:37:46.550 [2024-11-10 00:11:12.469760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.469795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.469893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.469927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3642945 ']' 00:37:46.550 qpair failed and we were unable to recover it. 
00:37:46.550 [2024-11-10 00:11:12.470030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.470063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.470191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:46.550 [2024-11-10 00:11:12.470224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:46.550 [2024-11-10 00:11:12.470327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:46.550 [2024-11-10 00:11:12.470364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:46.550 00:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.550 [2024-11-10 00:11:12.471194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.471245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.471392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.471426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.471556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.471597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.471714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.471751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 
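Interleaved with the connect errors, the harness trace above shows nvmf/common.sh starting a fresh target instance inside the cvl_0_0_ns_spdk namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xF0, pid 3642945; as SPDK's common application options are usually read, -m 0xF0 is the core mask, -i the shared-memory id and -e the tracepoint group mask) and then calling waitforlisten, which per the trace sets rpc_addr=/var/tmp/spdk.sock and max_retries=100 before switching xtrace off. A minimal sketch of that wait-for-the-RPC-socket pattern follows; it is an approximation written for readability, not the actual waitforlisten from autotest_common.sh:

    # Illustrative sketch only: poll until the target's RPC UNIX socket appears,
    # giving up if the process dies or the retry budget (100 here) runs out.
    wait_for_rpc_socket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
        local i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process is gone
            [[ -S "$rpc_addr" ]] && return 0         # RPC socket has shown up
            sleep 0.5
        done
        return 1
    }

    # Usage with the pid from this trace:
    # wait_for_rpc_socket 3642945 /var/tmp/spdk.sock 100 || echo 'target never came up'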
00:37:46.550 [2024-11-10 00:11:12.471876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.471909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.472073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.472106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.472223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.472257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.472370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.472404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.475606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.475661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.475856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.475895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.476048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.476085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.476240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.476277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.476440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.476485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.476639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.476675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 
00:37:46.550 [2024-11-10 00:11:12.476804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.476841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.550 [2024-11-10 00:11:12.476983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.550 [2024-11-10 00:11:12.477017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.550 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.477169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.477204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.477312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.477347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.477488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.477522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.477649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.477683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.477841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.477876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.477974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.478008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.478147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.478181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.478295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.478329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 
00:37:46.551 [2024-11-10 00:11:12.478460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.478509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.478672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.478726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.478883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.478928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.479087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.479127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.479268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.479314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.479436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.479471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.479598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.479642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.479790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.479839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.479967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.480002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.480165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.480198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 
00:37:46.551 [2024-11-10 00:11:12.480331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.480365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.480503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.480539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.480664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.480698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.480825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.480866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.480982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.481017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.481141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.481174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.481289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.481325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.481500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.481535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.481658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.481692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.482604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.482661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 
00:37:46.551 [2024-11-10 00:11:12.482840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.482877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.483048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.483084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.551 [2024-11-10 00:11:12.483196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.551 [2024-11-10 00:11:12.483231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.551 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.483406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.483443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.483616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.483653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.485075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.485129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.485283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.485318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.485462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.485496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.485628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.485664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.485770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.485803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 
00:37:46.552 [2024-11-10 00:11:12.485954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.485998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.486106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.486139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.486272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.486305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.486437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.486471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.486613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.486647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.486754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.486787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.486983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.487032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.487159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.487195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.487313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.487348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.487489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.487521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 
00:37:46.552 [2024-11-10 00:11:12.487641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.487675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.487795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.487832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.487983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.488018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.488132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.488166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.488309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.488343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.488475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.488507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.488630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.488664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.488801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.488834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.488985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.489018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.489122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.489156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 
00:37:46.552 [2024-11-10 00:11:12.489275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.489307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.489413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.489445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.489574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.489630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.489741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.489775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.489924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.489956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.490071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.552 [2024-11-10 00:11:12.490103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.552 qpair failed and we were unable to recover it. 00:37:46.552 [2024-11-10 00:11:12.490230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.490277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.491814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.491858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.492048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.492086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.492252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.492288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 
00:37:46.553 [2024-11-10 00:11:12.492462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.492497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.492651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.492699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.492821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.492857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.493021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.493054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.493173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.493212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.493348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.493381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.493534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.493568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.493718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.493752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.493871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.493904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.494013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.494051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 
00:37:46.553 [2024-11-10 00:11:12.494175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.494218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.494356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.494389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.494490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.494522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.494639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.494674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.494776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.494810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.494972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.495006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.495122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.495162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.495273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.495306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.495440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.495473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.495624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.495664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 
00:37:46.553 [2024-11-10 00:11:12.495767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.495801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.495944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.495982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.496140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.496173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.496271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.496304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.496418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.496451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.496555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.496597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.496713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.496747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.496905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.496938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.497040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.497072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.497218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.497254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 
00:37:46.553 [2024-11-10 00:11:12.497389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.497422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.497547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.497579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.553 [2024-11-10 00:11:12.497710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.553 [2024-11-10 00:11:12.497744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.553 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.497902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.497941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.498046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.498079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.498206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.498238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.498335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.498372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.498485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.498521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.498635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.498669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.498808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.498841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 
00:37:46.554 [2024-11-10 00:11:12.498991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.499024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.499136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.499169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.499271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.499304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.499424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.499464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.499615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.499648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.499760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.499792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.499918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.499967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.500141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.500186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.500361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.500402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 00:37:46.554 [2024-11-10 00:11:12.500542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.554 [2024-11-10 00:11:12.500593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.554 qpair failed and we were unable to recover it. 
00:37:46.554 [2024-11-10 00:11:12.500698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.554 [2024-11-10 00:11:12.500731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.554 qpair failed and we were unable to recover it.
00:37:46.554 [2024-11-10 00:11:12.500868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.554 [2024-11-10 00:11:12.500910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.554 qpair failed and we were unable to recover it.
00:37:46.554 [2024-11-10 00:11:12.501739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.554 [2024-11-10 00:11:12.501787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:46.554 qpair failed and we were unable to recover it.
00:37:46.555 [2024-11-10 00:11:12.506725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.555 [2024-11-10 00:11:12.506769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.555 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously between 00:11:12.501 and 00:11:12.543, cycling among tqpair values 0x6150001f2f00, 0x61500021ff00 and 0x6150001ffe80, all against addr=10.0.0.2, port=4420 ...]
00:37:46.560 [2024-11-10 00:11:12.543168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.560 [2024-11-10 00:11:12.543203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:46.560 qpair failed and we were unable to recover it.
00:37:46.560 [2024-11-10 00:11:12.543392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.543428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 00:37:46.560 [2024-11-10 00:11:12.543540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.543593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 00:37:46.560 [2024-11-10 00:11:12.543716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.543750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 00:37:46.560 [2024-11-10 00:11:12.543892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.543927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 00:37:46.560 [2024-11-10 00:11:12.544041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.544076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 00:37:46.560 [2024-11-10 00:11:12.544247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.544292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 00:37:46.560 [2024-11-10 00:11:12.544417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.544452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 00:37:46.560 [2024-11-10 00:11:12.544610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.544659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 00:37:46.560 [2024-11-10 00:11:12.544946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.545006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 00:37:46.560 [2024-11-10 00:11:12.545121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.545158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 
00:37:46.560 [2024-11-10 00:11:12.545325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.545360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 00:37:46.560 [2024-11-10 00:11:12.545521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.545554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 00:37:46.560 [2024-11-10 00:11:12.545732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.560 [2024-11-10 00:11:12.545767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.560 qpair failed and we were unable to recover it. 00:37:46.560 [2024-11-10 00:11:12.545932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.545973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.546111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.546155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.546272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.546308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.546447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.546483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.546632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.546667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.546807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.546842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.546995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.547030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 
00:37:46.561 [2024-11-10 00:11:12.547139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.547173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.547311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.547349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.547488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.547522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.547655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.547690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.547844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.547892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.548036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.548071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.548216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.548250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.548393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.548426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.548569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.548633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.548785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.548822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 
00:37:46.561 [2024-11-10 00:11:12.548998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.549038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.549225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.549263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.549385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.549463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.549617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.549664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.549834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.549892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.550080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.550115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.550249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.550282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.550415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.550448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.550584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.550625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.550756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.550789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 
00:37:46.561 [2024-11-10 00:11:12.550939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.550972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.551079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.551118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.561 [2024-11-10 00:11:12.551249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.561 [2024-11-10 00:11:12.551283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.561 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.551415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.551447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.551550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.551595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.551751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.551799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.551957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.551994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.552148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.552181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.552310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.552344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.552508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.552551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 
00:37:46.562 [2024-11-10 00:11:12.552716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.552750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.552896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.552930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.553040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.553073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.553193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.553226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.553345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.553388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.553536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.553569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.553690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.553723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.553837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.553870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.554002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.554035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.554142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.554174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 
00:37:46.562 [2024-11-10 00:11:12.554301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.554337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.554475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.554508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.554660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.554695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.554802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.554836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.554981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.555014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.555129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.555170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.555330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.555363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.555468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.555501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.555655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.555693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.555799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.555831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 
00:37:46.562 [2024-11-10 00:11:12.555978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.556011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.556123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.556156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.556309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.556345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.556473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.556508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.556642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.556676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.556810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.556844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.556986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.562 [2024-11-10 00:11:12.557026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.562 qpair failed and we were unable to recover it. 00:37:46.562 [2024-11-10 00:11:12.557126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.557159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.557319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.557354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.557520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.557553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 
00:37:46.563 [2024-11-10 00:11:12.557713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.557746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.557857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.557900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.558076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.558109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.558252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.558286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.558431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.558471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.558630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.558664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.558814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.558849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.558994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.559027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.559160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.559194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.559316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.559358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 
00:37:46.563 [2024-11-10 00:11:12.559486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.559519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.559657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.559692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.559830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.559864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.560020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.560062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.560207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.560241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.560394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.560427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.560535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.560568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.560740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.560774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.560924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.560966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.561099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.561132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 
00:37:46.563 [2024-11-10 00:11:12.561276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.561309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.561425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.561459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.561610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.561659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.561804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.561839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.561961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.561995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.562153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.562196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.562313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.562353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.562487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.562520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.562688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.562730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.562840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.562875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 
00:37:46.563 [2024-11-10 00:11:12.562994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.563028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.563140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.563174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.563 [2024-11-10 00:11:12.563337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.563 [2024-11-10 00:11:12.563380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.563 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.563504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.563538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.563663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.563698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.563835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.563867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.563988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.564021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.564153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.564197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.564344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.564376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.564492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.564525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 
00:37:46.564 [2024-11-10 00:11:12.564668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.564704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.564834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.564869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.565056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.565100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.565202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.565236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.565364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.565398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.565409] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:37:46.564 [2024-11-10 00:11:12.565538] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:46.564 [2024-11-10 00:11:12.565558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.565605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.565712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.565745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.565850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.565881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.565989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.566022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 
00:37:46.564 [2024-11-10 00:11:12.566174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.566207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.566370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.566403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.566542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.566594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.566710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.566743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.566874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.566917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.567065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.567100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.567215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.567250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.567398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.567432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.567540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.567579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.567735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.567769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 
00:37:46.564 [2024-11-10 00:11:12.567875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.567918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.568036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.568069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.568219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.568254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.568363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.568397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.568515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.568555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.568681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.568714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.568849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.568882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.568995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.569028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.569170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.569203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 00:37:46.564 [2024-11-10 00:11:12.569348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.564 [2024-11-10 00:11:12.569390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.564 qpair failed and we were unable to recover it. 
00:37:46.564 [2024-11-10 00:11:12.569499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.569532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.569659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.569693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.569802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.569836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.570011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.570062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.570221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.570257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.570393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.570426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.570564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.570618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.570754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.570788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.570923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.570957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.571138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.571171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 
00:37:46.565 [2024-11-10 00:11:12.571287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.571321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.571459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.571498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.571666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.571702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.571831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.571865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.572014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.572048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.572177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.572212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.572322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.572355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.572490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.572534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.572669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.572703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.572803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.572836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 
00:37:46.565 [2024-11-10 00:11:12.572999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.573032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.573133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.573166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.573302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.573339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.573455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.573490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.573640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.573676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.573803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.573852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.573998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.574036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.574174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.574209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.574356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.574401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.574535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.574569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 
00:37:46.565 [2024-11-10 00:11:12.574723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.574757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.574869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.574916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.575052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.575085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.575264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.575299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.575409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.575442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.575550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.565 [2024-11-10 00:11:12.575584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.565 qpair failed and we were unable to recover it. 00:37:46.565 [2024-11-10 00:11:12.575725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.575759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.575899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.575934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.576089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.576123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.576264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.576296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 
00:37:46.566 [2024-11-10 00:11:12.576434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.576467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.576612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.576647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.576749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.576782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.576921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.576960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.577067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.577100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.577217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.577250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.577395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.577428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.577568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.577616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.577792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.577841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.578005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.578042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 
00:37:46.566 [2024-11-10 00:11:12.578155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.578190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.578335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.578374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.578533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.578598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.578746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.578783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.578918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.578952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.579089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.579124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.579234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.579269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.579375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.579409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.579553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.579596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.579734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.579768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 
00:37:46.566 [2024-11-10 00:11:12.579903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.579946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.580115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.580148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.580258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.580292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.580420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.580455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.580618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.580653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.580793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.580826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.580972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.581005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.581158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.581206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.581329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.581364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.566 qpair failed and we were unable to recover it. 00:37:46.566 [2024-11-10 00:11:12.581500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.566 [2024-11-10 00:11:12.581534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 
00:37:46.567 [2024-11-10 00:11:12.581687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.581721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.581857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.581890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.582057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.582090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.582227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.582260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.582393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.582426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.582543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.582583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.582720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.582753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.582888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.582922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.583068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.583102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.583249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.583284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 
00:37:46.567 [2024-11-10 00:11:12.583422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.583455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.583600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.583634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.583792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.583825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.583949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.583982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.584121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.584153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.584266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.584301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.584404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.584436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.584538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.584572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.584704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.584737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.584869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.584903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 
00:37:46.567 [2024-11-10 00:11:12.585034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.585067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.585195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.585235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.585334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.585367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.585543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.585597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.585744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.585778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.585916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.585950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.586085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.586118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.586291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.586325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.586451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.586484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.586613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.586649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 
00:37:46.567 [2024-11-10 00:11:12.586767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.586801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.586939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.586976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.587086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.587119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.587236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.587268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.587404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.567 [2024-11-10 00:11:12.587436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.567 qpair failed and we were unable to recover it. 00:37:46.567 [2024-11-10 00:11:12.587555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.587607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.587731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.587779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.587898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.587935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.588105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.588139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.588278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.588312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 
00:37:46.568 [2024-11-10 00:11:12.588425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.588461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.588563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.588611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.588721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.588755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.588892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.588926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.589087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.589120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.589258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.589291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.589455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.589488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.589646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.589684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.589805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.589840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.589962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.589994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 
00:37:46.568 [2024-11-10 00:11:12.590122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.590154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.590269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.590307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.590409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.590444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.590599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.590633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.590768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.590802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.590947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.590981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.591112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.591146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.591278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.591312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.591483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.591517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.591692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.591726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 
00:37:46.568 [2024-11-10 00:11:12.591878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.591934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.592085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.592125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.592247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.592279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.592394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.592427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.592563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.592615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.592734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.592782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.592900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.592936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.593047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.593082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.593215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.593250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.568 qpair failed and we were unable to recover it. 00:37:46.568 [2024-11-10 00:11:12.593356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.568 [2024-11-10 00:11:12.593390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 
00:37:46.569 [2024-11-10 00:11:12.593503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.593538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.593700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.593735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.593900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.593948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.594058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.594108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.594242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.594276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.594415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.594449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.594555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.594616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.594753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.594787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.594895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.594936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.595055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.595089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 
00:37:46.569 [2024-11-10 00:11:12.595192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.595225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.595360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.595393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.595506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.595540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.595696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.595732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.595848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.595893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.596028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.596060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.596166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.596198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.596300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.596332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.596470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.596502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.596632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.596668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 
00:37:46.569 [2024-11-10 00:11:12.596809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.596843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.596957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.596991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.597092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.597125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.597242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.597277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.597409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.597443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.597551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.597584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.597708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.597741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.597847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.597879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.598022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.598054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.569 [2024-11-10 00:11:12.598186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.598221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 
00:37:46.569 [2024-11-10 00:11:12.598337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.569 [2024-11-10 00:11:12.598371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.569 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.598499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.598553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.598688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.598723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.598892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.598925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.599050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.599082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.599196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.599229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.599357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.599388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.599494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.599526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.599692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.599740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.599862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.599908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 
00:37:46.570 [2024-11-10 00:11:12.600075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.600121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.600217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.600251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.600387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.600422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.600536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.600569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.600726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.600761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.600905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.600947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.601048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.601081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.601239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.601271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.601389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.601420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.601521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.601554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 
00:37:46.570 [2024-11-10 00:11:12.601680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.601716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.601817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.601851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.602036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.602085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.602257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.602291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.602398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.602431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.602583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.602629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.602764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.602798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.602924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.602956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.603093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.603125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.603260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.603293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 
00:37:46.570 [2024-11-10 00:11:12.603425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.603457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.603614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.603663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.603783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.603822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.603945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.603980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.604131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.604166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.604279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.604325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.604487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.604521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.570 [2024-11-10 00:11:12.604648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.570 [2024-11-10 00:11:12.604683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.570 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.604785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.604817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.604928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.604961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 
00:37:46.571 [2024-11-10 00:11:12.605088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.605120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.605238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.605275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.605388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.605420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.605532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.605567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.605703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.605751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.605897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.605932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.606068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.606102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.606204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.606237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.606342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.606375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.606494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.606529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 
00:37:46.571 [2024-11-10 00:11:12.606718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.606752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.606858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.606890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.606999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.607033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.607168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.607200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.607333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.607365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.607503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.607539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.607708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.607757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.607906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.607945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.608084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.608120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.608262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.608297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 
00:37:46.571 [2024-11-10 00:11:12.608454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.608488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.608632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.608667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.608773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.608806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.608950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.608981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.609110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.609143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.609277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.609310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.609416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.609448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.571 [2024-11-10 00:11:12.609565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.571 [2024-11-10 00:11:12.609615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.571 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.609752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.609792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.609946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.609980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 
00:37:46.572 [2024-11-10 00:11:12.610117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.610152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.610262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.610296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.610410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.610443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.610547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.610580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.610783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.610831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.610983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.611018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.611162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.611196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.611331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.611365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.611503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.611537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.611694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.611729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 
00:37:46.572 [2024-11-10 00:11:12.611877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.611911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.612059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.612094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.612265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.612300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.612432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.612466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.612595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.612629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.612765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.612799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.612949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.612984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.613117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.613151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.613260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.613294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.613428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.613463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 
00:37:46.572 [2024-11-10 00:11:12.613597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.613632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.613786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.613833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.613982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.614016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.614152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.614186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.614328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.614362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.614519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.614552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.614701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.614761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.614942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.614978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.615116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.615149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.572 [2024-11-10 00:11:12.615286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.615319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 
00:37:46.572 [2024-11-10 00:11:12.615418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.572 [2024-11-10 00:11:12.615453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.572 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.615582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.615641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.615788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.615823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.615932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.615965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.616063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.616096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.616205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.616239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.616358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.616391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.616527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.616561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.616718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.616759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.616877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.616910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 
00:37:46.573 [2024-11-10 00:11:12.617017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.617058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.617157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.617190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.617288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.617322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.617450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.617482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.617609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.617645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.617754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.617787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.617942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.617977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.618124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.618157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.618265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.618297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.618409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.618441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 
00:37:46.573 [2024-11-10 00:11:12.618548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.618581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.618732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.618765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.618918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.618952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.619062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.619097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.619206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.619240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.619374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.619407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.619533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.619596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.619773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.619810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.619931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.619966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.620110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.620144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 
00:37:46.573 [2024-11-10 00:11:12.620280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.620313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.620442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.620478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.620645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.620680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.620791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.620824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.620967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.573 [2024-11-10 00:11:12.621001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.573 qpair failed and we were unable to recover it. 00:37:46.573 [2024-11-10 00:11:12.621147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.621181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.621289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.621322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.621421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.621455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.621583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.621639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.621813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.621849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 
00:37:46.574 [2024-11-10 00:11:12.622020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.622054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.622168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.622203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.622367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.622400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.622542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.622598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.622723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.622758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.622879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.622913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.623047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.623080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.623208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.623242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.623336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.623374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.623510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.623545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 
00:37:46.574 [2024-11-10 00:11:12.623690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.623738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.623894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.623934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.624071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.624105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.624240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.624273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.624410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.624444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.624581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.624626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.624765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.624801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.624944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.624978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.625112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.625146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.625263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.625298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 
00:37:46.574 [2024-11-10 00:11:12.625425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.625469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.625593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.625629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.625820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.625855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.625976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.626008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.626167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.626200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.626301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.626333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.626476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.626509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.626636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.626672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.626842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.574 [2024-11-10 00:11:12.626879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.574 qpair failed and we were unable to recover it. 00:37:46.574 [2024-11-10 00:11:12.627010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.627044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 
00:37:46.575 [2024-11-10 00:11:12.627178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.627213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.627320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.627355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.627489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.627523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.627643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.627678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.627797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.627832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.627977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.628010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.628162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.628196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.628331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.628371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.628484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.628518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.628667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.628702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 
00:37:46.575 [2024-11-10 00:11:12.628841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.628876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.628981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.629013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.629150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.629185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.629319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.629353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.629511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.629545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.629691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.629726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.629830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.629873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.630011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.630045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.630202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.630240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.630374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.630408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 
00:37:46.575 [2024-11-10 00:11:12.630541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.630581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.630698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.630732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.630877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.630911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.631012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.631046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.631155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.631190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.631296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.631330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.631467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.631500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.631618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.631653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.631791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.631824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.631941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.631975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 
00:37:46.575 [2024-11-10 00:11:12.632104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.632137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.632281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.575 [2024-11-10 00:11:12.632316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.575 qpair failed and we were unable to recover it. 00:37:46.575 [2024-11-10 00:11:12.632452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.632485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.632599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.632633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.632740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.632775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.632909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.632942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.633110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.633145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.633290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.633323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.633460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.633494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.633606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.633642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 
00:37:46.576 [2024-11-10 00:11:12.633778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.633826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.633976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.634012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.634164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.634198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.634334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.634369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.634482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.634516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.634669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.634705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.634819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.634853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.634989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.635022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.635167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.635200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.635339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.635372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 
00:37:46.576 [2024-11-10 00:11:12.635482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.635515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.635658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.635694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.635813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.635847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.635953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.635986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.636122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.636154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.636273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.636306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.636417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.636450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.636578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.636621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.636760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.636798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.636908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.636942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 
00:37:46.576 [2024-11-10 00:11:12.637077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.637113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.637252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.637285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.576 qpair failed and we were unable to recover it. 00:37:46.576 [2024-11-10 00:11:12.637446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.576 [2024-11-10 00:11:12.637479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.637580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.637621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.637757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.637791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.637892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.637926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.638088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.638122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.638260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.638293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.638401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.638433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.638532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.638564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 
00:37:46.577 [2024-11-10 00:11:12.638754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.638803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.638943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.638978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.639128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.639163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.639277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.639312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.639445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.639479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.639594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.639630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.639765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.639798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.639901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.639935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.640064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.640099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.640229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.640263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 
00:37:46.577 [2024-11-10 00:11:12.640426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.640460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.640602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.640637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.640758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.640819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.641048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.641085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.641215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.641249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.641391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.641425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.641562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.641603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.641723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.641758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.641868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.641903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.642018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.642053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 
00:37:46.577 [2024-11-10 00:11:12.642189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.642224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.642366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.642400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.642534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.642568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.642736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.642783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.642934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.642969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.643080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.643113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.643245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.643278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.577 [2024-11-10 00:11:12.643412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.577 [2024-11-10 00:11:12.643445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.577 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.643581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.643638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.643776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.643809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 
00:37:46.578 [2024-11-10 00:11:12.643911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.643944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.644062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.644094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.644199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.644231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.644327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.644360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.644466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.644499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.644698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.644746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.644865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.644911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.645019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.645053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.645158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.645192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.645407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.645440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 
00:37:46.578 [2024-11-10 00:11:12.645582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.645623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.645761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.645795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.645978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.646015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.646155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.646191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.646349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.646383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.646504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.646538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.646691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.646724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.646835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.646875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.646989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.647023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.647127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.647160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 
00:37:46.578 [2024-11-10 00:11:12.647381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.647417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.647545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.647600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.647710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.647743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.647904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.647937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.648045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.648079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.648244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.648279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.648439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.648474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.648627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.648676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.648794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.648830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.649000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.649034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 
00:37:46.578 [2024-11-10 00:11:12.649196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.649230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.649363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.649396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.578 qpair failed and we were unable to recover it. 00:37:46.578 [2024-11-10 00:11:12.649504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.578 [2024-11-10 00:11:12.649539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.649767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.649816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.649935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.649970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.650120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.650153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.650264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.650296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.650395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.650428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.650595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.650633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.650759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.650807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 
00:37:46.579 [2024-11-10 00:11:12.650923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.650958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.651103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.651137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.651275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.651308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.651447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.651482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.651618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.651652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.651792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.651827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.651982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.652014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.652150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.652182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.652344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.652384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 00:37:46.579 [2024-11-10 00:11:12.652499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.579 [2024-11-10 00:11:12.652533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.579 qpair failed and we were unable to recover it. 
00:37:46.579 [2024-11-10 00:11:12.652699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:46.579 [2024-11-10 00:11:12.652732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:46.579 qpair failed and we were unable to recover it.
[the same three-line failure repeats back to back through 2024-11-10 00:11:12.687208 (console timestamps 00:37:46.579 to 00:37:46.584), with only the microsecond timestamp and the tqpair handle changing; the handles cycle among 0x6150001f2f00, 0x615000210000, 0x61500021ff00 and 0x6150001ffe80, and every attempt targets addr=10.0.0.2, port=4420]
00:37:46.584 [2024-11-10 00:11:12.687346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.687379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.687494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.687531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.687672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.687707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.687818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.687851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.687993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.688026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.688181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.688216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.688350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.688384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.688541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.688598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.688761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.688797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.688916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.688950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 
00:37:46.584 [2024-11-10 00:11:12.689062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.689095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.689211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.689247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.689392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.689426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.689572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.584 [2024-11-10 00:11:12.689616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.584 qpair failed and we were unable to recover it. 00:37:46.584 [2024-11-10 00:11:12.689753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.689798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.689963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.689997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.690133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.690166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.690301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.690335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.690495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.690534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.690684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.690718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 
00:37:46.585 [2024-11-10 00:11:12.690849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.690888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.690999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.691032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.691146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.691180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.691289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.691324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.691477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.691511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.691644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.691679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.691789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.691822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.691969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.692002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.692111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.692144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.692309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.692342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 
00:37:46.585 [2024-11-10 00:11:12.692449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.692483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.692701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.692750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.692946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.692994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.693138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.693173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.693279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.693313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.693449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.693482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.693600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.693644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.693771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.693803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.693926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.693959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.694092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.694125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 
00:37:46.585 [2024-11-10 00:11:12.694276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.694310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.694432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.694480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.694604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.694644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.694781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.694817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.694977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.695011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.695159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.695197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.695364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.695398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.695539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.695573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.695726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.695758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.695871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.695911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 
00:37:46.585 [2024-11-10 00:11:12.696019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.696052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.696184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.696217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.696332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.696366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.696485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.696534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.696669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.696706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.696840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.696874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.696982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.697017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.697127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.697160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.697306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.697340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.697521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.697569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 
00:37:46.585 [2024-11-10 00:11:12.697712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.697748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.697896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.697930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.698043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.698075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.698173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.698205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.698322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.698358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.698506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.698541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.698709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.698744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.698875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.585 [2024-11-10 00:11:12.698908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.585 qpair failed and we were unable to recover it. 00:37:46.585 [2024-11-10 00:11:12.699011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.699044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.699175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.699208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 
00:37:46.586 [2024-11-10 00:11:12.699371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.699406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.699595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.699643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.699796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.699831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.700005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.700039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.700178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.700212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.700319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.700352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.700486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.700520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.700686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.700734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.700875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.700915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.701034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.701067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 
00:37:46.586 [2024-11-10 00:11:12.701167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.701200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.701335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.701367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.701480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.701512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.701669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.701705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.701841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.701876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.702012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.702051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.702209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.702243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.702379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.702412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.702552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.702593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.702703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.702737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 
00:37:46.586 [2024-11-10 00:11:12.702855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.702886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.702990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.703022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.703135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.703168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.703271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.703303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.703456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.703490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.703622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.703659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.703764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.703798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.703952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.704000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.704144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.704178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.704313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.704347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 
00:37:46.586 [2024-11-10 00:11:12.704454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.704486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.704620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.704653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.704761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.704793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.704905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.704936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.705047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.705079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.705214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.705247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.705362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.705393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.705531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.705564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.705681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.705713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.705848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.705881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 
00:37:46.586 [2024-11-10 00:11:12.705994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.706027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.706164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.706196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.706314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.706352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.706488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.706523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.706643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.706678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.706782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.706816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.706929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.706971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.707123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.707156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.707289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.586 [2024-11-10 00:11:12.707322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.586 qpair failed and we were unable to recover it. 00:37:46.586 [2024-11-10 00:11:12.707438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.877 [2024-11-10 00:11:12.707470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.877 qpair failed and we were unable to recover it. 
00:37:46.877 [2024-11-10 00:11:12.707637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.877 [2024-11-10 00:11:12.707670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.877 qpair failed and we were unable to recover it. 00:37:46.877 [2024-11-10 00:11:12.707784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.877 [2024-11-10 00:11:12.707820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.877 qpair failed and we were unable to recover it. 00:37:46.877 [2024-11-10 00:11:12.707962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.877 [2024-11-10 00:11:12.708010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.877 qpair failed and we were unable to recover it. 00:37:46.877 [2024-11-10 00:11:12.708166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.877 [2024-11-10 00:11:12.708203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.877 qpair failed and we were unable to recover it. 00:37:46.877 [2024-11-10 00:11:12.708312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.877 [2024-11-10 00:11:12.708346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.877 qpair failed and we were unable to recover it. 00:37:46.877 [2024-11-10 00:11:12.708452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.877 [2024-11-10 00:11:12.708491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.877 qpair failed and we were unable to recover it. 00:37:46.877 [2024-11-10 00:11:12.708607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.877 [2024-11-10 00:11:12.708652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.877 qpair failed and we were unable to recover it. 00:37:46.877 [2024-11-10 00:11:12.708758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.877 [2024-11-10 00:11:12.708793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.877 qpair failed and we were unable to recover it. 00:37:46.877 [2024-11-10 00:11:12.708925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.877 [2024-11-10 00:11:12.708957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.877 qpair failed and we were unable to recover it. 00:37:46.877 [2024-11-10 00:11:12.709071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.709103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 
00:37:46.878 [2024-11-10 00:11:12.709218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.709250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.709362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.709394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.709503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.709536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.709654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.709686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.709820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.709853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.709985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.710018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.710152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.710185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.710322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.710355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.710466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.710498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.710624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.710663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 
00:37:46.878 [2024-11-10 00:11:12.710810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.710845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.710993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.711027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.711142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.711177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.711285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.711318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.711450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.711483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.711634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.711668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.711782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.711814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.711950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.711982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.712085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.712117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.712230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.712267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 
00:37:46.878 [2024-11-10 00:11:12.712401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.712435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.712545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.712580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.712744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.712779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.712919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.712954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.713092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.713126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.713228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.713261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.713398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.713430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.713542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.713579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.713712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.713747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.713881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.713929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 
00:37:46.878 [2024-11-10 00:11:12.714066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.714100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.714206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.714240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.714381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.714417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.714583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.714633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.714746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.714779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.714882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.714920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.715083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.715118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.715221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.715258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.715391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.715424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.715559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.715598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 
00:37:46.878 [2024-11-10 00:11:12.715729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.715763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.715865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.715899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.716121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.878 [2024-11-10 00:11:12.716155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.878 qpair failed and we were unable to recover it. 00:37:46.878 [2024-11-10 00:11:12.716292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.716325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.716433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.716467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.716622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.716671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.716805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.716839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.716977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.717010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.717130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.717163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.717270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.717303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 
00:37:46.879 [2024-11-10 00:11:12.717439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.717473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.717582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.717629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.717789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.717822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.717945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.717980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.718120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.718155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.718274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.718311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.718424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.718489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.718664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.718698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.718810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.718843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.718957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.718989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 
00:37:46.879 [2024-11-10 00:11:12.719127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.719162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.719303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.719338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.719450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.719484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.719704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.719738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.719917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.719951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.720089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.720155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.720274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.720309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.720423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.720458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.720605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.720650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.720795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.720829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 
00:37:46.879 [2024-11-10 00:11:12.720935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.720969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.721102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.721137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.721241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.721276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.721405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.721440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.721546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.721578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.721750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.721791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.721900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.721934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.722031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.722065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.722198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.722242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.722351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.722386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 
00:37:46.879 [2024-11-10 00:11:12.722514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.722548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.722691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.722739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.722846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.722889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.723020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.723054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.723175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.723210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.723370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.723404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.723536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.723570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.879 qpair failed and we were unable to recover it. 00:37:46.879 [2024-11-10 00:11:12.723695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.879 [2024-11-10 00:11:12.723730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.723864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.723915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.724064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.724100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 
00:37:46.880 [2024-11-10 00:11:12.724206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.724240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.724349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.724383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.724499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.724533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.724661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.724696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.724811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.724846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.724958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.724992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.725111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.725144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.725300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.725333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.725433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.725466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.725577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.725625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 
00:37:46.880 [2024-11-10 00:11:12.725746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.725780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.726004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.726037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.726151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.726184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.726342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.726375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.726507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.726540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.726662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.726697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.726874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.726931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.727073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.727108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.727220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.727253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 
00:37:46.880 [2024-11-10 00:11:12.727345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:46.880 [2024-11-10 00:11:12.727358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.727391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.727526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.727558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.727751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.727787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.727923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.727958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.728090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.728124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.728288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.728322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.728492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.728525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.728639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.728674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.728784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.728819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 
00:37:46.880 [2024-11-10 00:11:12.728932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.728979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.729114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.729148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.729285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.729319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.729458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.729491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.729635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.729669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.729778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.729814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.729939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.729973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.730138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.730172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.730302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.730336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.730463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.730511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 
00:37:46.880 [2024-11-10 00:11:12.730625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.730668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.730781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.730816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.730951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.730984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.731144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.880 [2024-11-10 00:11:12.731177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.880 qpair failed and we were unable to recover it. 00:37:46.880 [2024-11-10 00:11:12.731314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.731347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.731459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.731492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.731622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.731663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.731802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.731835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.731957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.731992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.732130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.732163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 
00:37:46.881 [2024-11-10 00:11:12.732271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.732304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.732410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.732444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.732551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.732584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.732725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.732772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.732896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.732931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.733065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.733098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.733200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.733234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.733344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.733380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.733649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.733684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.733790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.733824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 
00:37:46.881 [2024-11-10 00:11:12.733935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.733969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.734105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.734139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.734272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.734305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.734437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.734471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.734591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.734626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.734768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.734801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.734906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.734940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.735052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.735087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.735222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.735255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.735365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.735398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 
00:37:46.881 [2024-11-10 00:11:12.735564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.735613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.735723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.735757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.735894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.735938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.736047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.736081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.736241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.736289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.736408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.736443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.736605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.736649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.736800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.736835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.736980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.737014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 00:37:46.881 [2024-11-10 00:11:12.737149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.881 [2024-11-10 00:11:12.737183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.881 qpair failed and we were unable to recover it. 
00:37:46.881 [2024-11-10 00:11:12.737317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.737355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.737528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.737565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.737696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.737730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.737844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.737880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.738021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.738055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.738163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.738197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.738355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.738389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.738524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.738559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.738683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.738717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.738848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.738904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 
00:37:46.882 [2024-11-10 00:11:12.739052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.739087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.739230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.739262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.739365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.739400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.739514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.739549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.739680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.739715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.739835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.739869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.739972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.740007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.740141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.740175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.740322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.740356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.740486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.740533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 
00:37:46.882 [2024-11-10 00:11:12.740697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.740733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.740852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.740887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.741009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.741043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.741146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.741180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.741323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.741358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.741490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.741538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.741668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.741703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.741845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.741879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.742056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.742090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.742189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.742223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 
00:37:46.882 [2024-11-10 00:11:12.742359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.742393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.742502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.742537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.742668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.742703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.742814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.742849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.742966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.743000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.743114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.743147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.743290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.743323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.743427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.743460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.743569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.743610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.743757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.743806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 
00:37:46.882 [2024-11-10 00:11:12.743935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.743975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.744142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.744176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.744331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.744364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.744480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.744515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.744666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.882 [2024-11-10 00:11:12.744701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.882 qpair failed and we were unable to recover it. 00:37:46.882 [2024-11-10 00:11:12.744809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.744843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.744987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.745021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.745174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.745208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.745313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.745358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.745479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.745528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 
00:37:46.883 [2024-11-10 00:11:12.745682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.745718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.745830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.745865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.745989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.746024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.746140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.746174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.746318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.746351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.746460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.746494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.746710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.746744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.746869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.746914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.747017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.747049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.747208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.747240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 
00:37:46.883 [2024-11-10 00:11:12.747377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.747410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.747519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.747551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.747693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.747726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.747835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.747870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.748019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.748052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.748153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.748187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.748298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.748332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.748462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.748510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.748653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.748689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.748802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.748836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 
00:37:46.883 [2024-11-10 00:11:12.748976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.749011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.749112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.749143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.749277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.749312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.749422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.749456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.749569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.749608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.749836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.749869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.750020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.750054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.750191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.750225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.750334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.750368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.750489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.750538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 
00:37:46.883 [2024-11-10 00:11:12.750736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.750779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.750893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.750927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.751035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.751069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.751217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.751250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.751356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.751391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.751500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.751534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.751681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.751715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.751826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.751858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.752002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.752035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 00:37:46.883 [2024-11-10 00:11:12.752142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.883 [2024-11-10 00:11:12.752174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.883 qpair failed and we were unable to recover it. 
00:37:46.884 [2024-11-10 00:11:12.752287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.752319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.752471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.752520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.752640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.752677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.752788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.752824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.753000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.753033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.753142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.753175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.753283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.753315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.753454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.753497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.753619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.753671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.753792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.753828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 
00:37:46.884 [2024-11-10 00:11:12.753969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.754003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.754121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.754156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.754263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.754297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.754420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.754468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.754591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.754626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.754734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.754770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.754883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.754917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.755055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.755088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.755199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.755232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.755397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.755432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 
00:37:46.884 [2024-11-10 00:11:12.755652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.755686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.755805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.755840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.755943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.755976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.756104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.756137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.756243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.756277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.756409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.756458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.756581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.756624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.756764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.756799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.756906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.756940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.757101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.757135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 
00:37:46.884 [2024-11-10 00:11:12.757270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.757314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.757451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.757485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.757623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.757656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.757788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.757820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.757924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.757957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.758103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.758135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.758236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.758268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.758374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.758409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.758553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.758611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.758771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.758820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 
00:37:46.884 [2024-11-10 00:11:12.758978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.759012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.759108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.759140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.759277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.759308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.759436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.759468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.884 [2024-11-10 00:11:12.759619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.884 [2024-11-10 00:11:12.759653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.884 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.759757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.759791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.759903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.759941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.760081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.760116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.760252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.760285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.760394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.760427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 
00:37:46.885 [2024-11-10 00:11:12.760596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.760630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.760740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.760774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.760890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.760925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.761031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.761065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.761172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.761206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.761347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.761381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.761488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.761522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.761675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.761711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.761855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.761888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.761996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.762029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 
00:37:46.885 [2024-11-10 00:11:12.762163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.762195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.762297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.762329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.762431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.762462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.762577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.762617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.762722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.762754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.762867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.762900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.763039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.763075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.763193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.763228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.763346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.763379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.763511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.763545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 
00:37:46.885 [2024-11-10 00:11:12.763700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.763740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.763873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.763919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.764027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.764061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.764163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.764197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.764335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.764369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.764538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.764573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.764691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.764726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.764829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.764863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.764974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.765007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.765169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.765202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 
00:37:46.885 [2024-11-10 00:11:12.765338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.765372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.765508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.765542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.765665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.765699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.765834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.765867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.766019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.885 [2024-11-10 00:11:12.766052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.885 qpair failed and we were unable to recover it. 00:37:46.885 [2024-11-10 00:11:12.766174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.766220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.766336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.766372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.766523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.766572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.766704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.766738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.766884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.766917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 
00:37:46.886 [2024-11-10 00:11:12.767022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.767055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.767216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.767250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.767378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.767412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.767526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.767561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.767706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.767754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.767895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.767931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.768054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.768087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.768230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.768263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.768402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.768436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.768547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.768583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 
00:37:46.886 [2024-11-10 00:11:12.768733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.768767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.768880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.768915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.769017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.769051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.769189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.769222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.769329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.769362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.769474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.769510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.769650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.769684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.769814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.769848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.769949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.769983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.770094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.770128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 
00:37:46.886 [2024-11-10 00:11:12.770269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.770303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.770416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.770451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.770605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.770653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.770768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.770804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.770961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.771009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.771131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.771168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.771306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.771340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.771470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.771504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.771633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.771668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.771778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.771813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 
00:37:46.886 [2024-11-10 00:11:12.771978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.772013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.772124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.772160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.772271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.772306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.772411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.772447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.772571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.772612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.772719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.772752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.772882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.772914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.773041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.773073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.773176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.886 [2024-11-10 00:11:12.773208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.886 qpair failed and we were unable to recover it. 00:37:46.886 [2024-11-10 00:11:12.773313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.773344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 
00:37:46.887 [2024-11-10 00:11:12.773479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.773514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.773633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.773667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.773808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.773843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.774004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.774038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.774142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.774177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.774315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.774348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.774454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.774488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.774613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.774653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.774762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.774797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.774962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.774996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 
00:37:46.887 [2024-11-10 00:11:12.775131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.775165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.775289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.775324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.775467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.775512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.775650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.775685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.775818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.775851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.775989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.776022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.776127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.776162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.776295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.776329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.776460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.776509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.776679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.776716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 
00:37:46.887 [2024-11-10 00:11:12.776850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.776898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.777026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.777061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.777174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.777207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.777320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.777355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.777519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.777552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.777694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.777728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.777839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.777874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.778028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.778065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.778177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.778210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.778315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.778347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 
00:37:46.887 [2024-11-10 00:11:12.778454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.778487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.778620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.778653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.778764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.778797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.778895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.778928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.779093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.779125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.779261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.779297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.779406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.779441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.779582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.779624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.779758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.779791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.779923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.779956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 
00:37:46.887 [2024-11-10 00:11:12.780065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.780099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.780232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.780266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.780365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.780400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.780513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.887 [2024-11-10 00:11:12.780548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.887 qpair failed and we were unable to recover it. 00:37:46.887 [2024-11-10 00:11:12.780689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.780725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.780866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.780900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.781071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.781105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.781214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.781252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.781359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.781394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.781525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.781558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 
00:37:46.888 [2024-11-10 00:11:12.781701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.781736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.781859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.781894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.782004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.782035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.782148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.782180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.782293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.782327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.782456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.782490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.782632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.782667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.782830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.782864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.782972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.783005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.783115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.783147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 
00:37:46.888 [2024-11-10 00:11:12.783263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.783297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.783438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.783470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.783600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.783633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.783769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.783802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.783904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.783937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.784043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.784075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.784219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.784250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.784355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.784387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.784524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.784557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.784703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.784739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 
00:37:46.888 [2024-11-10 00:11:12.784836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.784870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.785004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.785038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.785206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.785240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.785353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.785386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.785543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.785597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.785746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.785784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.785921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.785955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.786063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.786099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.786240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.786276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.786411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.786444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 
00:37:46.888 [2024-11-10 00:11:12.786609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.786657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.786770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.786805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.786972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.787007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.787165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.787198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.787298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.787332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.787446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.787479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.787619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.787654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.787810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.787864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.888 [2024-11-10 00:11:12.787981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.888 [2024-11-10 00:11:12.788015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.888 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.788218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.788251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 
00:37:46.889 [2024-11-10 00:11:12.788352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.788387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.788493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.788527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.788645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.788681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.788826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.788860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.788963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.788995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.789093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.789125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.789267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.789300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.789399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.789432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.789532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.789564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.789686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.789723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 
00:37:46.889 [2024-11-10 00:11:12.789854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.789889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.790030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.790064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.790176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.790210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.790326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.790358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.790497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.790533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.790648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.790690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.790800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.790833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.790968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.791001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.791138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.791175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.791286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.791320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 
00:37:46.889 [2024-11-10 00:11:12.791436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.791470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.791605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.791639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.791744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.791778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.791881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.791913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.792053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.792086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.792203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.792237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.792353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.792389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.792503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.792536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.792651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.792684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.792818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.792850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 
00:37:46.889 [2024-11-10 00:11:12.792955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.792987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.793092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.793123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.793228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.793275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.793415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.793450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.793563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.793602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.793714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.793749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.889 [2024-11-10 00:11:12.793862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.889 [2024-11-10 00:11:12.793897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.889 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.794005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.794042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.794149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.794184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.794326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.794360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 
00:37:46.890 [2024-11-10 00:11:12.794469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.794503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.794641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.794676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.794780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.794814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.794945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.794978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.795078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.795111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.795215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.795248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.795354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.795386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.795495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.795527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.795633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.795665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.795793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.795826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 
00:37:46.890 [2024-11-10 00:11:12.795997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.796030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.796170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.796202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.796308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.796340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.796472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.796504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.796724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.796760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.796927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.796961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.797103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.797136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.797247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.797288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.797424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.797458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.797590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.797624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 
00:37:46.890 [2024-11-10 00:11:12.797758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.797791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.797934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.797967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.798102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.798134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.798268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.798301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.798415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.798448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.798584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.798632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.798941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.798974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.799111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.799144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.799265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.799297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.799410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.799445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 
00:37:46.890 [2024-11-10 00:11:12.799557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.799597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.799739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.890 [2024-11-10 00:11:12.799772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.890 qpair failed and we were unable to recover it. 00:37:46.890 [2024-11-10 00:11:12.799869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.799903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.800012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.800046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.800181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.800215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.800353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.800388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.800495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.800527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.800677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.800714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.800852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.800885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.801014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.801046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 
00:37:46.891 [2024-11-10 00:11:12.801181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.801213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.801349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.801386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.801520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.801553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.801676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.801710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.801819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.801852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.802017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.802051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.802265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.802298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.802436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.802469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.802604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.802648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.802789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.802822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 
00:37:46.891 [2024-11-10 00:11:12.802960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.802991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.803126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.803158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.803261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.803293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.803402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.803438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.803617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.803650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.803777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.803810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.803955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.803988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.804096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.804128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.804261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.804292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.804400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.804433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 
00:37:46.891 [2024-11-10 00:11:12.804547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.804581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.804726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.804758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.804881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.804916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.805016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.805061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.805198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.805232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.805368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.805402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.805562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.805625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.805770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.805817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.805940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.805974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 00:37:46.891 [2024-11-10 00:11:12.806086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.891 [2024-11-10 00:11:12.806119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.891 qpair failed and we were unable to recover it. 
00:37:46.891 [2024-11-10 00:11:12.806253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.806285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.806414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.806446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.806552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.806593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.806719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.806767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.806892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.806929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.807036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.807070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.807205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.807238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.807354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.807393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.807501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.807536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.807686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.807721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 
00:37:46.892 [2024-11-10 00:11:12.807824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.807858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.807966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.807999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.808109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.808142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.808262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.808309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.808480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.808515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.808624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.808659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.808790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.808823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.808928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.808961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.809111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.809158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.809273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.809308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 
00:37:46.892 [2024-11-10 00:11:12.809425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.809473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.809603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.809645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.809790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.809823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.809929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.809963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.810070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.810103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.810247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.810283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.810438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.810486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.810651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.810700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.810812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.810847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.810983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.811016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 
00:37:46.892 [2024-11-10 00:11:12.811115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.811148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.811252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.811285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.811402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.811437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.811578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.811620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.811726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.811758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.811895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.811928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.812084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.812116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.812266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.812299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.812398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.812430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.892 qpair failed and we were unable to recover it. 00:37:46.892 [2024-11-10 00:11:12.812551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.892 [2024-11-10 00:11:12.812596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 
00:37:46.893 [2024-11-10 00:11:12.812722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.812758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.812873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.812917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.813057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.813091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.813210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.813257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.813372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.813408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.813511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.813545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.813664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.813696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.813795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.813832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.813938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.813970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.814104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.814135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 
00:37:46.893 [2024-11-10 00:11:12.814245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.814281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.814396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.814431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.814564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.814606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.814716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.814750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.814880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.814913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.815018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.815051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.815212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.815246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.815351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.815386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.815527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.815562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.815693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.815728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 
00:37:46.893 [2024-11-10 00:11:12.815866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.815909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.816019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.816053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.816215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.816249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.816382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.816415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.816544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.816599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.816764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.816799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.816906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.816939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.817043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.817075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.817184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.817217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.817363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.817397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 
00:37:46.893 [2024-11-10 00:11:12.817514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.817549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.817693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.817726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.817830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.817861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.817992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.818024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.818165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.818197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.818334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.818368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.818500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.818548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.818697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.818746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.893 qpair failed and we were unable to recover it. 00:37:46.893 [2024-11-10 00:11:12.818866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.893 [2024-11-10 00:11:12.818900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.819031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.819064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 
00:37:46.894 [2024-11-10 00:11:12.819162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.819193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.819298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.819331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.819449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.819484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.819637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.819686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.819797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.819833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.819973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.820007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.820170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.820204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.820336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.820376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.820519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.820553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.820694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.820742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 
00:37:46.894 [2024-11-10 00:11:12.820871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.820919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.821067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.821103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.821270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.821305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.821440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.821474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.821646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.821695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.821816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.821851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.821995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.822029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.822195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.822228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.822347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.822396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.822543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.822603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 
00:37:46.894 [2024-11-10 00:11:12.822721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.822756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.822877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.822912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.823021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.823055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.823192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.823226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.823341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.823377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.823505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.823553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.823686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.823724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.823836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.823870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.823978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.824011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.824175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.824209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 
00:37:46.894 [2024-11-10 00:11:12.824344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.824378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.824492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.824528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.824675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.824724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.824876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.824911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.825081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.825117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.825258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.825292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.825431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.825465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.825603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.825638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.825770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.825804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.825930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.825978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 
00:37:46.894 [2024-11-10 00:11:12.826122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.826156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.894 qpair failed and we were unable to recover it. 00:37:46.894 [2024-11-10 00:11:12.826261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.894 [2024-11-10 00:11:12.826293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.826456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.826489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.826622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.826655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.826777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.826825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.826975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.827010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.827120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.827153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.827284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.827322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.827428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.827463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.827623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.827657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 
00:37:46.895 [2024-11-10 00:11:12.827791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.827825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.827969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.828002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.828135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.828168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.828298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.828331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.828469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.828503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.828629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.828677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.828810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.828858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.828973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.829008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.829117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.829150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.829281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.829313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 
00:37:46.895 [2024-11-10 00:11:12.829446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.829478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.829623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.829659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.829772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.829807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.829953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.829987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.830120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.830153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.830286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.830318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.830440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.830487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.830649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.830684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.830805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.830853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.830967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.831002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 
00:37:46.895 [2024-11-10 00:11:12.831115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.831148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.831279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.831313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.831415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.831449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.831575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.831641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.831783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.831817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.831922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.831957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.832068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.832102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.832210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.832243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.832363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.832412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.832534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.832582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 
00:37:46.895 [2024-11-10 00:11:12.832711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.832748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.832882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.832916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.833026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.833059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.833199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.833232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.833367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.833401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.895 qpair failed and we were unable to recover it. 00:37:46.895 [2024-11-10 00:11:12.833559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.895 [2024-11-10 00:11:12.833614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.833732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.833769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.833872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.833912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.834060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.834093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.834223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.834255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 
00:37:46.896 [2024-11-10 00:11:12.834365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.834400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.834508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.834543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.834701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.834750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.834898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.834932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.835074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.835106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.835218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.835251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.835411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.835443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.835548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.835579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.835722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.835755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.835874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.835910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 
00:37:46.896 [2024-11-10 00:11:12.836043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.836078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.836235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.836283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.836427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.836460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.836622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.836657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.836768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.836800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.836938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.836971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.837076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.837107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.837218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.837251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.837387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.837424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.837575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.837632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 
00:37:46.896 [2024-11-10 00:11:12.837790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.837839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.838007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.838041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.838165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.838198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.838295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.838327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.838445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.838481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.838705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.838752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.838909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.838957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.839126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.839162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.839291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.839325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.839423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.839456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 
00:37:46.896 [2024-11-10 00:11:12.839564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.839606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.839759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.839807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.839964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.840012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.840156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.840191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.840332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.840365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.896 qpair failed and we were unable to recover it. 00:37:46.896 [2024-11-10 00:11:12.840473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.896 [2024-11-10 00:11:12.840507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.840646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.840694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.840851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.840904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.841023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.841060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.841191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.841224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 
00:37:46.897 [2024-11-10 00:11:12.841328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.841362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.841492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.841524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.841691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.841740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.841854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.841891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.842031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.842066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.842178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.842213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.842319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.842354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.842483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.842516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.842650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.842685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.842803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.842840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 
00:37:46.897 [2024-11-10 00:11:12.842951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.842985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.843152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.843185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.843319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.843352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.843479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.843528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.843645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.843680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.843834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.843867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.844083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.844132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.844245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.844278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.844386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.844421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.844525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.844557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 
00:37:46.897 [2024-11-10 00:11:12.844670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.844704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.844810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.844842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.844952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.844985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.845089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.845122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.845287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.845322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.845462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.845495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.845608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.845642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.845750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.845783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.845915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.845963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.846137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.846173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 
00:37:46.897 [2024-11-10 00:11:12.846312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.846345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.846451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.846484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.846610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.846644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.846792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.846827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.897 qpair failed and we were unable to recover it. 00:37:46.897 [2024-11-10 00:11:12.846939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.897 [2024-11-10 00:11:12.846971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.847079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.847112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.847212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.847245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.847361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.847399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.847527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.847576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.847706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.847740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 
00:37:46.898 [2024-11-10 00:11:12.847868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.847916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.848064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.848097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.848209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.848241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.848356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.848389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.848500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.848534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.848647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.848681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.848785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.848817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.849039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.849077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.849246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.849281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.849416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.849449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 
00:37:46.898 [2024-11-10 00:11:12.849556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.849596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.849840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.849888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.850007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.850044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.850184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.850218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.850324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.850359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.850478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.850526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.850653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.850689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.850798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.850833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.850953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.850986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.851120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.851165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 
00:37:46.898 [2024-11-10 00:11:12.851294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.851327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.851439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.851475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.851602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.851651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.851768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.851803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.851912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.851946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.852078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.852110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.852266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.852299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.852407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.852442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.852552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.852596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.852726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.852773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 
00:37:46.898 [2024-11-10 00:11:12.852920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.852956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.853068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.853102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.898 [2024-11-10 00:11:12.853207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.898 [2024-11-10 00:11:12.853241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.898 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.853348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.853383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.853484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.853517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.853628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.853662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.853771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.853804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.853925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.853974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.854124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.854159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.854267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.854302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 
00:37:46.899 [2024-11-10 00:11:12.854446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.854480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.854712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.854762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.854876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.854911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.855020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.855052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.855154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.855185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.855284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.855316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.855417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.855452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.855558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.855601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.855748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.855781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.855893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.855926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 
00:37:46.899 [2024-11-10 00:11:12.856030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.856064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.856185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.856219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.856325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.856358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.856490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.856523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.856660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.856694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.856799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.856833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.856954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.856989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.857106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.857144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.857301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.857337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.857474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.857522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 
00:37:46.899 [2024-11-10 00:11:12.857643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.857679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.857802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.857836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.857942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.857975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.858080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.858113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.858250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.858291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.858411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.858446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.858551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.858584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.858740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.858773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.858893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.858941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.859113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.859148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 
00:37:46.899 [2024-11-10 00:11:12.859261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.859294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.859446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.859480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.859609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.859658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.859800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.859834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.859946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.859980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.860154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.860188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.899 [2024-11-10 00:11:12.860304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.899 [2024-11-10 00:11:12.860337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.899 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.860489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.860537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.860671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.860706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.860822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.860858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 
00:37:46.900 [2024-11-10 00:11:12.860999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.861035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.861173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.861208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.861344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.861377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.861531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.861580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.861700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.861737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.861848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.861882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.862017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.862050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.862187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.862220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.862320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.862352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.862498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.862531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 
00:37:46.900 [2024-11-10 00:11:12.862682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.862720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.862835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.862870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.862983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.863015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.863125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.863158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.863261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.863293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.863435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.863471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.863577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.863621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.863766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.863804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.863948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.863982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.864089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.864122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 
00:37:46.900 [2024-11-10 00:11:12.864257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.864288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.864416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.864449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.864553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.864593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.864764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.864811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.864868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:46.900 [2024-11-10 00:11:12.864924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:46.900 [2024-11-10 00:11:12.864928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.864948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:46.900 [2024-11-10 00:11:12.864962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.900 [2024-11-10 00:11:12.864971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.864989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:46.900 [2024-11-10 00:11:12.865100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.865133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.865247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.865280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.865431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.865479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 
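[Editorial note] Interleaved with the connect errors above is a multi-line app_setup_trace notice; read together it says that tracepoint group mask 0xFFFF was enabled and explains how to inspect the resulting trace: 'spdk_trace -s nvmf -i 0' captures a snapshot of events at runtime, plain 'spdk_trace' also works when this is the only SPDK application running, or /dev/shm/nvmf_trace.0 can be copied for offline analysis/debug. The commands below restate that notice; the instance name/number and the /dev/shm file name depend on how the application was started, so treat them as environment-specific:

    # Snapshot runtime events for this 'nvmf' app instance, as the notice suggests:
    spdk_trace -s nvmf -i 0
    # Plain 'spdk_trace' also works if this is the only SPDK application running:
    spdk_trace
    # Or copy the shared-memory trace file for offline analysis (destination is arbitrary):
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0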
00:37:46.900 [2024-11-10 00:11:12.865612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.865660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.865790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.865838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.865983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.866018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.866128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.866163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.866280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.866314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.866453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.866488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.866641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.866689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.866810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.866847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.866992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.867025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.900 [2024-11-10 00:11:12.867130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.867162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 
00:37:46.900 [2024-11-10 00:11:12.867298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.900 [2024-11-10 00:11:12.867331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.900 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.867447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.867482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.867643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.867679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.867709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:46.901 [2024-11-10 00:11:12.867745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:46.901 [2024-11-10 00:11:12.867793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:46.901 [2024-11-10 00:11:12.867836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.867799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:46.901 [2024-11-10 00:11:12.867872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.868036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.868069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.868179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.868214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.868326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.868360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.868492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.868539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.868695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.868742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 
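[Editorial note] The reactor.c notices mixed into this stretch report that SPDK reactors (the per-core event loops) came up on cores 5, 6, 4 and 7, which also explains why the entries here interleave slightly out of timestamp order: several cores are logging concurrently. A rough way to confirm that placement from another shell — a sketch only, assuming the target process is still running, that its binary is named nvmf_tgt (an assumption; the name depends on how the test launched it), and that reactor threads carry 'reactor_' names (true for recent SPDK releases, but worth verifying on the version under test):

    # List the target's threads with the CPU each one last ran on;
    # pinned reactor threads should stay on their assigned cores (4-7 here).
    pgrep -f nvmf_tgt | head -n1 | xargs -r -I{} ps -L -o tid,psr,comm -p {} | grep -i reactor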
00:37:46.901 [2024-11-10 00:11:12.868866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.868902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.869022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.869056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.869164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.869197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.869303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.869336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.869440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.869473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.869600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.869648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.869761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.869797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.869926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.869975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.870095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.870131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.870255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.870289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 
00:37:46.901 [2024-11-10 00:11:12.870442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.870476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.870612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.870647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.870775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.870822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.870946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.870980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.871084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.871120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.871233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.871266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.871374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.871407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.871510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.871545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.871661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.871696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.871828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.871861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 
00:37:46.901 [2024-11-10 00:11:12.871976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.872012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.872151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.872185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.872286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.872320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.872426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.872460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.872596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.872644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.872774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.872822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.872974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.873009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.873121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.901 [2024-11-10 00:11:12.873161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.901 qpair failed and we were unable to recover it. 00:37:46.901 [2024-11-10 00:11:12.873280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.873314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.873423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.873457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 
00:37:46.902 [2024-11-10 00:11:12.873560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.873599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.873729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.873777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.873914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.873961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.874102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.874138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.874259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.874294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.874410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.874443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.874578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.874627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.874733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.874766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.874878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.874915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.875020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.875054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 
00:37:46.902 [2024-11-10 00:11:12.875166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.875203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.875328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.875363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.875496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.875544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.875698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.875732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.875852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.875885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.876016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.876048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.876204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.876237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.876349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.876381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.876504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.876553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.876690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.876738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 
00:37:46.902 [2024-11-10 00:11:12.876883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.876920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.877033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.877067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.877183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.877220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.877331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.877366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.877509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.877545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.877665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.877698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.877823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.877871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.877991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.878027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.878149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.878184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 00:37:46.902 [2024-11-10 00:11:12.878289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.902 [2024-11-10 00:11:12.878323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.902 qpair failed and we were unable to recover it. 
00:37:46.907 [2024-11-10 00:11:12.910181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.910214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.910312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.910345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.910460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.910508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.910650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.910697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.910808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.910843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.910989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.911023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.911125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.911158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.911270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.911303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.911445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.911493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.911631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.911668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 
00:37:46.907 [2024-11-10 00:11:12.911802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.911836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.911943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.911975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.912080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.912113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.912243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.912275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.912393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.912429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.912613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.912648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.912769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.912817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.912963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.912999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.913124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.913171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.913288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.913323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 
00:37:46.907 [2024-11-10 00:11:12.913456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.913505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.913635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.913671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.913786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.913820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.913923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.913957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.914058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.914091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.914225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.907 [2024-11-10 00:11:12.914259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.907 qpair failed and we were unable to recover it. 00:37:46.907 [2024-11-10 00:11:12.914365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.914400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.914517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.914564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.914690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.914728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.914833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.914867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 
00:37:46.908 [2024-11-10 00:11:12.915007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.915041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.915149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.915185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.915288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.915321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.915433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.915469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.915594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.915648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.915761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.915796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.915937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.915970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.916071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.916104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.916219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.916251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.916357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.916393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 
00:37:46.908 [2024-11-10 00:11:12.916504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.916539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.916674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.916712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.916826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.916860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.916961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.916994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.917092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.917125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.917241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.917276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.917380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.917416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.917523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.917557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.917676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.917713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.917811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.917844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 
00:37:46.908 [2024-11-10 00:11:12.917976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.918009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.918153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.918189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.918299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.918334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.918453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.918501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.918622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.918656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.918764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.918799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.918938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.918973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.919095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.919129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.919256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.919290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.919400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.919435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 
00:37:46.908 [2024-11-10 00:11:12.919552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.919607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.919736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.919770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.919879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.919915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.920023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.920057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.920197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.920230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.920336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.920369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.920482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.920517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.920653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.920702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.920815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.920849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.908 [2024-11-10 00:11:12.920962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.920994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 
00:37:46.908 [2024-11-10 00:11:12.921109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.908 [2024-11-10 00:11:12.921141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.908 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.921242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.921274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.921389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.921422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.921526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.921559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.921703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.921746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.921903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.921950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.922061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.922096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.922204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.922238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.922345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.922377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.922480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.922513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 
00:37:46.909 [2024-11-10 00:11:12.922621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.922653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.922765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.922801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.922920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.922969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.923087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.923123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.923234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.923268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.923366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.923398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.923504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.923536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.923649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.923682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.923792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.923826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.923925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.923958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 
00:37:46.909 [2024-11-10 00:11:12.924068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.924101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.924212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.924246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.924351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.924383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.924492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.924523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.924627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.924661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.924784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.924832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.924959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.925007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.925121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.925157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.925266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.925300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.925408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.925440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 
00:37:46.909 [2024-11-10 00:11:12.925539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.925571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.925706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.925738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.925846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.925883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.925998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.926033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.926150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.926186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.926324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.926358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.926458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.926489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.926598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.926632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.926744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.926776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 00:37:46.909 [2024-11-10 00:11:12.926876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.926908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.909 qpair failed and we were unable to recover it. 
00:37:46.909 [2024-11-10 00:11:12.927008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.909 [2024-11-10 00:11:12.927041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.927159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.927193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.927319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.927367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.927478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.927514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.927650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.927688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.927796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.927828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.927926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.927958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.928090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.928123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.928228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.928262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.928380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.928428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 
00:37:46.910 [2024-11-10 00:11:12.928570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.928616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.928756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.928791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.928893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.928925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.929034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.929067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.929185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.929218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.929357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.929391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.929495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.929529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.929672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.929706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.929820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.929854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.929991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.930025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 
00:37:46.910 [2024-11-10 00:11:12.930127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.930160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.930291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.930325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.930428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.930459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.930595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.930643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.930759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.930794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.930918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.930966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.931114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.931151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.931261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.931295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.931397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.931430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 00:37:46.910 [2024-11-10 00:11:12.931545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.910 [2024-11-10 00:11:12.931579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.910 qpair failed and we were unable to recover it. 
00:37:46.915 [2024-11-10 00:11:12.962516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.962555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.962690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.962738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.962859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.962896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.963009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.963043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.963154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.963187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.963286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.963319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.963446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.963494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.963614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.963650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.963773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.963822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.963940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.963976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 
00:37:46.915 [2024-11-10 00:11:12.964089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.964122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.964278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.964312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.964424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.964457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.964580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.964638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.964750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.964790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.964899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.964934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.965067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.965101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.965205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.965238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.965355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.965390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.965503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.965536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 
00:37:46.915 [2024-11-10 00:11:12.965677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.965725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.965841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.965876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.965985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.966018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.966131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.966164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.966305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.966338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.966452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.966487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.966627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.966661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.966775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.915 [2024-11-10 00:11:12.966812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.915 qpair failed and we were unable to recover it. 00:37:46.915 [2024-11-10 00:11:12.966924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.966959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.967068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.967101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 
00:37:46.916 [2024-11-10 00:11:12.967212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.967247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.967356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.967392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.967512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.967547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.967660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.967694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.967805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.967838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.967984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.968017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.968120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.968154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.968260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.968295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.968421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.968469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.968582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.968624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 
00:37:46.916 [2024-11-10 00:11:12.968733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.968767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.968879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.968913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.969022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.969056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.969199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.969233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.969339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.969373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.969483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.969520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.969637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.969672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.969771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.969804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.969909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.969942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.970067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.970099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 
00:37:46.916 [2024-11-10 00:11:12.970216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.970249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.970357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.970390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.970512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.970546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.970681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.970716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.970831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.970871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.971007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.971040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.971146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.971180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.971284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.971317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.971441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.971490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.971601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.971637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 
00:37:46.916 [2024-11-10 00:11:12.971780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.971813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.971920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.971953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.972061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.972093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.972243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.972278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.972395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.972430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.972543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.972576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.972691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.972725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.972834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.972867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.972978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.973011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.973120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.973154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 
00:37:46.916 [2024-11-10 00:11:12.973266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.973301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.973438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.973487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.916 [2024-11-10 00:11:12.973644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.916 [2024-11-10 00:11:12.973682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.916 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.973790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.973839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.973987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.974021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.974129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.974163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.974271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.974305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.974414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.974448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.974581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.974624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.974733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.974768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 
00:37:46.917 [2024-11-10 00:11:12.974909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.974942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.975056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.975090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.975230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.975265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.975391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.975425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.975544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.975599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.975732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.975768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.975904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.975938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.976051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.976085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.976228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.976262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.976371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.976406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 
00:37:46.917 [2024-11-10 00:11:12.976532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.976580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.976708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.976743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.976853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.976886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.977000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.977033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.977144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.977183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.977289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.977323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.977446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.977493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.977632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.977680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.977789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.977824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.977935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.977967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 
00:37:46.917 [2024-11-10 00:11:12.978072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.978106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.978224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.978257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.978368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.978403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.978520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.978558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.978691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.978739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.978858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.978893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.979009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.979041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.979145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.979177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.979325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.979357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.979484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.979532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 
00:37:46.917 [2024-11-10 00:11:12.979650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.979686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.979799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.979833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.979968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.980001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.980104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.917 [2024-11-10 00:11:12.980138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.917 qpair failed and we were unable to recover it. 00:37:46.917 [2024-11-10 00:11:12.980258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.980306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.980440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.980488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.980630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.980679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.980798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.980833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.980942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.980976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.981112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.981146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 
00:37:46.918 [2024-11-10 00:11:12.981255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.981290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.981423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.981471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.981595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.981633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.981769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.981802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.981908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.981940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.982075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.982107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.982213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.982247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.982355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.982391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.982523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.982571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.982699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.982734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 
00:37:46.918 [2024-11-10 00:11:12.982840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.982874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.982989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.983022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.983123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.983157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.983261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.983295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.983397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.983429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.983562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.983616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.983749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.983785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.983909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.983957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.984076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.984113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.984215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.984250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 
00:37:46.918 [2024-11-10 00:11:12.984353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.984387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.918 qpair failed and we were unable to recover it. 00:37:46.918 [2024-11-10 00:11:12.984500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.918 [2024-11-10 00:11:12.984535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.984659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.984696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.984832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.984879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.984995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.985029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.985133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.985166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.985271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.985305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.985437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.985471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.985610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.985646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.985777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.985825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 
00:37:46.919 [2024-11-10 00:11:12.985940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.985975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.986081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.986114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.986222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.986255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.986378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.986427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.986542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.986577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.986728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.986775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.986888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.986923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.987028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.987061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.987171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.987205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.987317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.987352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 
00:37:46.919 [2024-11-10 00:11:12.987478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.987526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.987650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.987691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.987832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.987866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.987970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.988003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.988105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.988139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.988248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.988281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.988382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.988416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.988542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.988597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.988716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.988750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.988857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.988890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 
00:37:46.919 [2024-11-10 00:11:12.989027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.989060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.989174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.989207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.989318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.989352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.989467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.989515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.989657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.989706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.989834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.989868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.990004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.990038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.990173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.990218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.990327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.990360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.990493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.990526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 
00:37:46.919 [2024-11-10 00:11:12.990667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.990714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.990837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.919 [2024-11-10 00:11:12.990875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.919 qpair failed and we were unable to recover it. 00:37:46.919 [2024-11-10 00:11:12.990980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.991015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.991127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.991161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.991293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.991327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.991447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.991495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.991616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.991652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.991780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.991828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.991953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.991988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.992108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.992141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 
00:37:46.920 [2024-11-10 00:11:12.992243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.992277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.992391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.992426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.992551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.992606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.992721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.992757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.992886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.992919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.993029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.993063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.993164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.993198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.993337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.993372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.993480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.993515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.993653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.993688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 
00:37:46.920 [2024-11-10 00:11:12.993795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.993829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.993960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.993999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.994115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.994162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.994277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.994311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.994425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.994472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.994595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.994631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.994735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.994769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.994879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.994913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.995016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.995050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.995188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.995226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 
00:37:46.920 [2024-11-10 00:11:12.995365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.995402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.995538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.995600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.995719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.995753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.995858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.995891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.995995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.996028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.996138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.996173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.920 qpair failed and we were unable to recover it. 00:37:46.920 [2024-11-10 00:11:12.996293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.920 [2024-11-10 00:11:12.996328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.996487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.996535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.996655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.996690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.996797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.996830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 
00:37:46.921 [2024-11-10 00:11:12.996932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.996964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.997074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.997106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.997240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.997274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.997396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.997444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.997562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.997620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.997736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.997770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.997907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.997939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.998047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.998078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.998180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.998212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.998321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.998356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 
00:37:46.921 [2024-11-10 00:11:12.998475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.998523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.998648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.998686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.998802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.998837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.998945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.998980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.999085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.999119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.999226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.999260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.999365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.999400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.999526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.999574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.999707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.999743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:12.999846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:12.999880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 
00:37:46.921 [2024-11-10 00:11:13.000009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.000043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.000152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.000191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.000299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.000333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.000442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.000478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.000595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.000633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.000746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.000780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.000888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.000922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.001054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.001088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.001195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.001231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.001367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.001402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 
00:37:46.921 [2024-11-10 00:11:13.001543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.001578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.001703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.001738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.001848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.001882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.001986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.002019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.002156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.002189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.921 qpair failed and we were unable to recover it. 00:37:46.921 [2024-11-10 00:11:13.002303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.921 [2024-11-10 00:11:13.002338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.002486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.002533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.002653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.002688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.002794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.002827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.002925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.002958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 
00:37:46.922 [2024-11-10 00:11:13.003060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.003093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.003201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.003236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.003356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.003405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.003560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.003616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.003732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.003766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.003877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.003910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.004027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.004061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.004166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.004200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.004311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.004344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.004448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.004481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 
00:37:46.922 [2024-11-10 00:11:13.004612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.004661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.004784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.004820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.004931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.004965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.005082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.005118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.005222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.005256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.005362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.005395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.005506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.005541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.005660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.005696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.005801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.005834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.005943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.005976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 
00:37:46.922 [2024-11-10 00:11:13.006108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.006142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.006256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.006294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.006400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.006434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.006617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.006665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.006796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.006843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.006955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.006990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.007127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.922 [2024-11-10 00:11:13.007162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.922 qpair failed and we were unable to recover it. 00:37:46.922 [2024-11-10 00:11:13.007261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.007294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.007422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.007470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.007620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.007655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 
00:37:46.923 [2024-11-10 00:11:13.007768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.007806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.007916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.007949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.008054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.008088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.008236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.008272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.008403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.008436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.008582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.008622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.008733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.008768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.008880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.008916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.009030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.009064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.009194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.009228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 
00:37:46.923 [2024-11-10 00:11:13.009332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.009365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.009478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.009512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.009627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.009663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.009769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.009801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.009911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.009943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.010042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.010075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.010188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.010221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.010335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.010383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.010507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.010542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.010657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.010692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 
00:37:46.923 [2024-11-10 00:11:13.010809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.010849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.010958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.010993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.011098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.011131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.011236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.011270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.011395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.011443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.011600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.011649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.011771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.011805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.011912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.011947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.012084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.012118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.012229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.012263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 
00:37:46.923 [2024-11-10 00:11:13.012399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.012433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.012563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.012623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.012741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.012779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.923 [2024-11-10 00:11:13.012885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.923 [2024-11-10 00:11:13.012920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.923 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.013024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.013057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.013159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.013191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.013297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.013330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.013431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.013464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.013582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.013639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.013754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.013790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 
00:37:46.924 [2024-11-10 00:11:13.013898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.013932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.014062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.014096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.014198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.014232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.014341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.014375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.014495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.014530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.014663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.014709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.014818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.014854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.014997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.015031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.015160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.015192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.015290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.015321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 
00:37:46.924 [2024-11-10 00:11:13.015434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.015466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.015626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.015659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.015775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.015810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.015918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.015952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.016085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.016119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.016250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.016284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.016402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.016437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.016557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.016611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.016730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.016766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.016880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.016915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 
00:37:46.924 [2024-11-10 00:11:13.017022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.017054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.017192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.017226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.017332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.017366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.017495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.017543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.017701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.017736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.017843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.017876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.017978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.018011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.018112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.018145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.018255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.018290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.018415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.018464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 
00:37:46.924 [2024-11-10 00:11:13.018581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.018632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.018743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.018784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.924 qpair failed and we were unable to recover it. 00:37:46.924 [2024-11-10 00:11:13.018928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.924 [2024-11-10 00:11:13.018962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.019059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.019092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.019192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.019225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.019336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.019372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.019488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.019536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.019664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.019701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.019810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.019844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.019949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.019982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 
00:37:46.925 [2024-11-10 00:11:13.020118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.020151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.020258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.020293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.020405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.020441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.020603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.020652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.020773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.020809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.020927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.020961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.021076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.021109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.021213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.021247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.021357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.021393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.021493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.021527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 
00:37:46.925 [2024-11-10 00:11:13.021676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.021711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.021820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.021853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.021955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.021988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.022106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.022141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.022244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.022279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.022402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.022451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.022569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.022613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.022724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.022771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.022883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.022916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.023023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.023058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 
00:37:46.925 [2024-11-10 00:11:13.023161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.023196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.023306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.023342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.023459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.023497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.023610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.023645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.023750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.023783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.023883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.023915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.024015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.024047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.024148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.024179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.024276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.024310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.024424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.024461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 
00:37:46.925 [2024-11-10 00:11:13.024571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.024615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.024720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.925 [2024-11-10 00:11:13.024759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.925 qpair failed and we were unable to recover it. 00:37:46.925 [2024-11-10 00:11:13.024871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.024903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.025005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.025037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.025143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.025176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.025281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.025316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.025425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.025462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.025570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.025615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.025724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.025758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.025873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.025905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 
00:37:46.926 [2024-11-10 00:11:13.026039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.026071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.026178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.026210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.026318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.026352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.026462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.026497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.026720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.026756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.026874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.026908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.027045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.027079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.027209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.027242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.027344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.027380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.027489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.027523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 
00:37:46.926 [2024-11-10 00:11:13.027634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.027669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.027774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.027808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.027914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.027947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.028049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.028082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.028213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.028247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.028351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.028386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.028492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.028526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.028645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.028680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.028794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.028828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.028965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.028998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 
00:37:46.926 [2024-11-10 00:11:13.029095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.029128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.029233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.029266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.029373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.029407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.029514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.029549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.029666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.029701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.029835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.029869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.029978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.030012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.030125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.030159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.030264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.030297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.926 [2024-11-10 00:11:13.030400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.030435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 
00:37:46.926 [2024-11-10 00:11:13.030537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.926 [2024-11-10 00:11:13.030572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.926 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.030704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.030758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.030878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.030912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.031021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.031055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.031155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.031188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.031292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.031326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.031434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.031469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.031604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.031652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.031767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.031804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.031917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.031952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 
00:37:46.927 [2024-11-10 00:11:13.032065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.032100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.032211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.032245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.032349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.032384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.032540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.032595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.032706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.032744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.032857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.032892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.033008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.033043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.033156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.033189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.033449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.033482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.033599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.033634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 
00:37:46.927 [2024-11-10 00:11:13.033760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.033808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.033924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.033958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.034092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.034125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.034227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.034260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.034407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.034442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.034551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.034584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.034709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.034744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.034850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.034884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.927 [2024-11-10 00:11:13.035026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.927 [2024-11-10 00:11:13.035059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.927 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.035166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.035199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 
00:37:46.928 [2024-11-10 00:11:13.035331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.035365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.035489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.035538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.035674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.035722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.035844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.035878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.035983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.036015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.036155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.036186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.036288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.036321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.036440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.036489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.036606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.036643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.036759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.036796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 
00:37:46.928 [2024-11-10 00:11:13.036900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.036934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.037036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.037072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.037198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.037231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.037331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.037363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.037508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.037557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.037692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.037740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.037882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.037919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.038028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.038062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.038198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.038230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.038353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.038386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 
00:37:46.928 [2024-11-10 00:11:13.038493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.038526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.038641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.038679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.038783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.038818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.038951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.038985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.039115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.039155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.039266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.039298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.039404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.039436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.039562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.039601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.039706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.039738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.039845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.039894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 
00:37:46.928 [2024-11-10 00:11:13.040009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.040042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.040151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.040185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.040290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.040324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.040439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.040471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.040576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.040621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.040719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.040751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.040906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.040954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.928 qpair failed and we were unable to recover it. 00:37:46.928 [2024-11-10 00:11:13.041073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.928 [2024-11-10 00:11:13.041109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.041229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.041265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.041378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.041412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 
00:37:46.929 [2024-11-10 00:11:13.041532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.041580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.041714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.041761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.041886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.041931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.042041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.042076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.042218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.042252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.042395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.042431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.042555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.042610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.042731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.042769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.042895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.042931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.043040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.043076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 
00:37:46.929 [2024-11-10 00:11:13.043185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.043218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.043339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.043378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.043484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.043517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.043655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.043704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.043821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.043863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.043987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.044023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.044139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.044186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.044305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.044340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.044539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.044593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.044725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.044761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 
00:37:46.929 [2024-11-10 00:11:13.044875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.044908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.045018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.045051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.045149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.045182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.045283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.045318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.045428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.045464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.045670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.045713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.045844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.045889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.046009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.046044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.046159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:46.929 [2024-11-10 00:11:13.046193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:46.929 qpair failed and we were unable to recover it. 00:37:46.929 [2024-11-10 00:11:13.046302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.197 [2024-11-10 00:11:13.046336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.197 qpair failed and we were unable to recover it. 
00:37:47.197 [2024-11-10 00:11:13.046451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.197 [2024-11-10 00:11:13.046486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.197 qpair failed and we were unable to recover it. 00:37:47.197 [2024-11-10 00:11:13.046594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.197 [2024-11-10 00:11:13.046629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.197 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.046729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.046764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.046863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.046897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.047003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.047036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.047155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.047192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.047302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.047337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.047451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.047490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.047612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.047651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.047799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.047834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 
00:37:47.198 [2024-11-10 00:11:13.047969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.048003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.048117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.048151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.048264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.048300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.048455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.048505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.048618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.048655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.048762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.048796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.048951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.048995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.049099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.049135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.049266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.049301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.049437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.049473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 
00:37:47.198 [2024-11-10 00:11:13.049633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.049671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.049787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.049827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.049947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.049980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.050084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.050117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.050227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.050259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.050363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.050395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.050510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.050547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.050710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.050746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.050865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.050906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.051009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.051043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 
00:37:47.198 [2024-11-10 00:11:13.051185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.051219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.051356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.051394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.051529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.051577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.051736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.051770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.051889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.051922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.198 qpair failed and we were unable to recover it. 00:37:47.198 [2024-11-10 00:11:13.052032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.198 [2024-11-10 00:11:13.052068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.052185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.052219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.052329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.052365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.052499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.052533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.052682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.052731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 
00:37:47.199 [2024-11-10 00:11:13.052852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.052888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.052991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.053024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.053121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.053154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.053294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.053327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.053440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.053474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.053610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.053650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.053757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.053790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.053915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.053965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.054109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.054147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.054292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.054331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 
00:37:47.199 [2024-11-10 00:11:13.054450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.054487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.054633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.054668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.054783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.054818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.054931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.054966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.055070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.055103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.055207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.055252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.055362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.055395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.055533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.055566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.055689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.055729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.055846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.055882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 
00:37:47.199 [2024-11-10 00:11:13.055986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.056021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.056130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.056171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.056310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.056344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.056445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.056480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.056604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.056660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.056806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.056847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.056982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.057027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.057131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.057165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.057270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.057305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.057414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.057447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 
00:37:47.199 [2024-11-10 00:11:13.057557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.057602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.057730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.199 [2024-11-10 00:11:13.057766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.199 qpair failed and we were unable to recover it. 00:37:47.199 [2024-11-10 00:11:13.057884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.057928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.058034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.058069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.058180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.058215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.058355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.058389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.058495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.058530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.058680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.058729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.058846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.058883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.059006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.059041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 
00:37:47.200 [2024-11-10 00:11:13.059152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.059188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.059311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.059362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.059511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.059547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.059720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.059758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.059886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.059931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.060065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.060100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.060200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.060235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.060396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.060431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.060544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.060583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.060722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.060772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 
00:37:47.200 [2024-11-10 00:11:13.060902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.060972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.061083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.061119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.061230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.061266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.061372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.061406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.061512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.061547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.061715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.061764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.061892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.061936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.062054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.062089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.062194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.062229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.062340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.062374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 
00:37:47.200 [2024-11-10 00:11:13.062472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.062507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.062614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.062659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.062822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.062861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.062993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.063029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.063143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.063177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.063285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.200 [2024-11-10 00:11:13.063320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.200 qpair failed and we were unable to recover it. 00:37:47.200 [2024-11-10 00:11:13.063423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.063456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.063599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.063645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.063759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.063791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.063893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.063939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 
00:37:47.201 [2024-11-10 00:11:13.064053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.064088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.064203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.064236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.064354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.064405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.064565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.064653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.064782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.064831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.064986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.065022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.065130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.065175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.065314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.065348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.065457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.065491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.065599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.065641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 
00:37:47.201 [2024-11-10 00:11:13.065741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.065775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.065900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.065939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.066053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.066094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.066208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.066248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.066353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.066389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.066518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.066553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.066711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.066749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.066868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.066905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.067020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.067059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.067190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.067224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 
00:37:47.201 [2024-11-10 00:11:13.067331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.067366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.067496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.067546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.067678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.067715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.067828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.067875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.067994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.068029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.068131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.068166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.068300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.068334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.068443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.068479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.068608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.068668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 00:37:47.201 [2024-11-10 00:11:13.068784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.201 [2024-11-10 00:11:13.068820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.201 qpair failed and we were unable to recover it. 
00:37:47.202 [2024-11-10 00:11:13.068946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.068981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.069096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.069131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.069246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.069283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.069385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.069421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.069549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.069585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.069736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.069771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.069925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.069976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.070094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.070132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.070252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.070288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.070421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.070457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 
00:37:47.202 [2024-11-10 00:11:13.070600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.070662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.070785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.070821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.070941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.070974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.071137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.071170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.071281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.071316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.071441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.071491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.071608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.071650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.071799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.071865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.071975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.072014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.072113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.072149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 
00:37:47.202 [2024-11-10 00:11:13.072276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.072312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.072434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.072485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.072617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.072663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.072767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.072803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.072939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.072975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.073078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.073113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.073252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.073287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.073399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.073436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.073538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.073579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.202 [2024-11-10 00:11:13.073708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.073742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 
00:37:47.202 [2024-11-10 00:11:13.073840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.202 [2024-11-10 00:11:13.073874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.202 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.074014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.074049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.074185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.074235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.074391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.074429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.074535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.074570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.074749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.074784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.074898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.074933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.075082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.075132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.075248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.075285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.075399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.075434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 
00:37:47.203 [2024-11-10 00:11:13.075543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.075578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.075701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.075735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.075884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.075919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.076027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.076062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.076162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.076198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.076353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.076403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.076517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.076554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.076680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.076727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.076838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.076873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.076980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.077015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 
00:37:47.203 [2024-11-10 00:11:13.077152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.077193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.077310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.077348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.077472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.077522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.077657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.077694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.077813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.077847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.077963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.077997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.078134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.078167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.078275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.078308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.078421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.078456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.078557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.078600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 
00:37:47.203 [2024-11-10 00:11:13.078720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.078753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.078872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.078909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.079023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.079058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.079156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.079191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.203 [2024-11-10 00:11:13.079297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.203 [2024-11-10 00:11:13.079332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.203 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.079457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.079507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.079656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.079705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.079831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.079876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.080016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.080056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.080168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.080204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 
00:37:47.204 [2024-11-10 00:11:13.080341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.080377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.080488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.080525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.080688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.080728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.080881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.080925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.081037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.081073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.081212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.081248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.081403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.081452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.081562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.081607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.081756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.081805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.081964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.082001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 
00:37:47.204 [2024-11-10 00:11:13.082139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.082174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.082283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.082318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.082433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.082469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.082605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.082650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.082761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.082795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.082911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.082948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.083059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.083095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.083228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.083264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.083384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.083421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.083548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.083603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 
00:37:47.204 [2024-11-10 00:11:13.083734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.083771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.083890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.083925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.084040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.084075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.084198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.084248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.084367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.084404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.084537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.084595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.084727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.084762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.084877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.084911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.085018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.085053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 00:37:47.204 [2024-11-10 00:11:13.085161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.204 [2024-11-10 00:11:13.085194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.204 qpair failed and we were unable to recover it. 
00:37:47.205 [2024-11-10 00:11:13.085301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.085339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.085479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.085515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.085654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.085703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.085822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.085868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.086017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.086051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.086151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.086184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.086300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.086334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.086469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.086503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.086648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.086690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.086817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.086875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 
00:37:47.205 [2024-11-10 00:11:13.087041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.087079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.087194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.087229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.087335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.087370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.087482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.087516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.087662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.087698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.087815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.087851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.088003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.088038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.088146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.088181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.088314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.088350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.088485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.088520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 
00:37:47.205 [2024-11-10 00:11:13.088665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.088700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.088803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.088838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.088963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.088998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.089112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.089147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.089281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.089316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.089420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.089455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.089608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.089664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.089777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.089812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.089955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.089989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.090092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.090127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 
00:37:47.205 [2024-11-10 00:11:13.090237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.090272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.090399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.090434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.090567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.090610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.205 qpair failed and we were unable to recover it. 00:37:47.205 [2024-11-10 00:11:13.090737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.205 [2024-11-10 00:11:13.090776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.206 qpair failed and we were unable to recover it. 00:37:47.206 [2024-11-10 00:11:13.090938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.206 [2024-11-10 00:11:13.090988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.206 qpair failed and we were unable to recover it. 00:37:47.206 [2024-11-10 00:11:13.091141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.206 [2024-11-10 00:11:13.091180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.206 qpair failed and we were unable to recover it. 00:37:47.206 [2024-11-10 00:11:13.091300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.206 [2024-11-10 00:11:13.091336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.206 qpair failed and we were unable to recover it. 00:37:47.206 [2024-11-10 00:11:13.091451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.206 [2024-11-10 00:11:13.091487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.206 qpair failed and we were unable to recover it. 00:37:47.206 [2024-11-10 00:11:13.091652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.206 [2024-11-10 00:11:13.091702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.206 qpair failed and we were unable to recover it. 00:37:47.206 [2024-11-10 00:11:13.091818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.206 [2024-11-10 00:11:13.091864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.206 qpair failed and we were unable to recover it. 
00:37:47.212 [2024-11-10 00:11:13.124536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.124597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.124726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.124776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.124898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.124936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.125044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.125080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.125183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.125219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.125364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.125413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.125551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.125593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.125728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.125778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.125934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.125976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.126096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.126132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 
00:37:47.212 [2024-11-10 00:11:13.126239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.126275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.126393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.126429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.126561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.126612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.126727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.126762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.126871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.126909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.127022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.127056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.127190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.127225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.127334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.127368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.127493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.127543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.127698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.127748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 
00:37:47.212 [2024-11-10 00:11:13.127897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.127935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.128047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.128082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.128193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.128229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.128336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.128373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.128525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.128575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.128726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.128776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.128895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.128931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.129044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.129078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.129185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.129219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 00:37:47.212 [2024-11-10 00:11:13.129323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.212 [2024-11-10 00:11:13.129357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.212 qpair failed and we were unable to recover it. 
00:37:47.213 [2024-11-10 00:11:13.129494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.129531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.129652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.129691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.129797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.129836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.129954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.129989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.130094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.130128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.130266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.130299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.130414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.130450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.130604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.130656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.130783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.130833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.130960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.130996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 
00:37:47.213 [2024-11-10 00:11:13.131126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.131160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.131314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.131350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.131455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.131490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.131603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.131640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.131753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.131787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.131940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.131990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.132107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.132151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.132290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.132324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.132431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.132470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.132581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.132625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 
00:37:47.213 [2024-11-10 00:11:13.132755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.132799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.132933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.132975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.133091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.133127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.133235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.133268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.133380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.133415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.133530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.133564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.133726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.133780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.133934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.133972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.134088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.134122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.134245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.134280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 
00:37:47.213 [2024-11-10 00:11:13.134385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.134434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.213 qpair failed and we were unable to recover it. 00:37:47.213 [2024-11-10 00:11:13.134543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.213 [2024-11-10 00:11:13.134577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.134712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.134749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.134874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.134914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.135051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.135086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.135192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.135226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.135333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.135368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.135498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.135548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.135664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.135700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.135811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.135845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 
00:37:47.214 [2024-11-10 00:11:13.135950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.135985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.136093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.136128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.136281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.136318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.136435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.136471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.136576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.136618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.136761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.136797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.136903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.136939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.137049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.137083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.137186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.137221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.137323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.137359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 
00:37:47.214 [2024-11-10 00:11:13.137490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.137540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.137673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.137723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.137863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.137899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.138002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.138037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.138172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.138206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.138336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.138371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.138473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.138507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.138641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.138693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.138819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.138874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.138997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.139034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 
00:37:47.214 [2024-11-10 00:11:13.139142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.139176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.139284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.139322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.139453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.139503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.139634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.214 [2024-11-10 00:11:13.139670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.214 qpair failed and we were unable to recover it. 00:37:47.214 [2024-11-10 00:11:13.139779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.139814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.139942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.139977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.140080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.140116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.140237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.140271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.140382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.140417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.140564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.140636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 
00:37:47.215 [2024-11-10 00:11:13.140751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.140789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.140903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.140940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.141057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.141091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.141198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.141231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.141337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.141373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.141480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.141516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.141646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.141696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.141824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.141860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.141977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.142012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.142116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.142151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 
00:37:47.215 [2024-11-10 00:11:13.142295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.142333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.142444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.142480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.142593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.142629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.142743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.142778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.142886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.142923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.143036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.143071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.143178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.143215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.143370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.143420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.143548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.143607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.143726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.143763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 
00:37:47.215 [2024-11-10 00:11:13.143879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.143916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.144022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.144057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.144199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.144234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.144351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.144386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.144527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.144566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.144707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.144757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.144877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.144913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.145016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.145050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.145184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.145218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.145349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.145385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 
00:37:47.215 [2024-11-10 00:11:13.145507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.215 [2024-11-10 00:11:13.145557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.215 qpair failed and we were unable to recover it. 00:37:47.215 [2024-11-10 00:11:13.145701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.145751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.145875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.145911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.146018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.146053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.146185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.146236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.146354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.146392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.146523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.146573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.146709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.146747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.146896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.146934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.147053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.147091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 
00:37:47.216 [2024-11-10 00:11:13.147200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.147235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.147348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.147384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.147504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.147543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.147665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.147703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.147818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.147855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.147962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.147997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.148093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.148128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.148234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.148269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 [2024-11-10 00:11:13.148377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.148414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:47.216 qpair failed and we were unable to recover it. 00:37:47.216 A controller has encountered a failure and is being reset. 
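errno 111 on Linux is ECONNREFUSED: at this point in the disconnect test the target's listener on 10.0.0.2:4420 is down, so every TCP connect() from the host is refused and no qpair can be re-established until the listener comes back. The shell probe below is a minimal sketch for checking that condition by hand; it is not part of the test scripts, and it assumes a Linux host whose bash supports the /dev/tcp pseudo-device and can route to 10.0.0.2.

# Probe the NVMe/TCP listener the host keeps trying to reach.
# bash's /dev/tcp redirection performs an ordinary TCP connect();
# "Connection refused" here is the same errno 111 (ECONNREFUSED) seen above.
if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420 && exec 3>&- 3<&-' 2>/dev/null; then
    echo "listener is up on 10.0.0.2:4420"
else
    echo "connect refused or timed out - no listener on 10.0.0.2:4420"
fi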
00:37:47.216 [2024-11-10 00:11:13.148656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.216 [2024-11-10 00:11:13.148704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:47.216 [2024-11-10 00:11:13.148734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:47.216 [2024-11-10 00:11:13.148781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:47.216 [2024-11-10 00:11:13.148815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:37:47.216 [2024-11-10 00:11:13.148843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:37:47.216 [2024-11-10 00:11:13.148874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:37:47.216 Unable to reset the controller. 00:37:47.475 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:47.475 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:37:47.475 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:47.475 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:47.475 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.475 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:47.475 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:47.475 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.475 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.739 Malloc0 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.739 [2024-11-10 00:11:13.734557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.739 [2024-11-10 00:11:13.764574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.739 00:11:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3642532 00:37:48.304 Controller properly reset. 00:37:53.574 Initializing NVMe Controllers 00:37:53.574 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:53.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:53.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:53.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:53.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:53.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:53.574 Initialization complete. Launching workers. 
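The target bring-up traced above for nvmf_target_disconnect_tc2 is a short JSON-RPC sequence: create a 64 MiB Malloc bdev with 512-byte blocks, create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, attach Malloc0 as a namespace, and add a data listener plus a discovery listener on 10.0.0.2:4420. A hedged recap of the same calls through SPDK's scripts/rpc.py, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock RPC socket:

    # Sketch of the equivalent manual sequence; flags mirror the rpc_cmd calls in the trace.
    RPC=./scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_transport -t tcp -o               # TCP transport, options as in the trace
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420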
00:37:53.574 Starting thread on core 1 00:37:53.574 Starting thread on core 2 00:37:53.574 Starting thread on core 3 00:37:53.574 Starting thread on core 0 00:37:53.574 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:53.574 00:37:53.574 real 0m11.765s 00:37:53.574 user 0m35.579s 00:37:53.574 sys 0m7.678s 00:37:53.574 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:53.574 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:53.574 ************************************ 00:37:53.575 END TEST nvmf_target_disconnect_tc2 00:37:53.575 ************************************ 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:53.575 rmmod nvme_tcp 00:37:53.575 rmmod nvme_fabrics 00:37:53.575 rmmod nvme_keyring 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3642945 ']' 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3642945 00:37:53.575 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3642945 ']' 00:37:53.576 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 3642945 00:37:53.576 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:37:53.576 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:53.576 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3642945 00:37:53.576 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:37:53.576 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:37:53.576 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3642945' 00:37:53.576 killing process with pid 3642945 00:37:53.576 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 3642945 00:37:53.576 00:11:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 3642945 00:37:54.150 00:11:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:54.150 00:11:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:54.150 00:11:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:54.150 00:11:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:54.150 00:11:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:37:54.150 00:11:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:54.150 00:11:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:37:54.150 00:11:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:54.150 00:11:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:54.150 00:11:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:54.150 00:11:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:54.150 00:11:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.682 00:11:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:56.682 00:37:56.682 real 0m17.837s 00:37:56.682 user 1m4.417s 00:37:56.682 sys 0m10.412s 00:37:56.682 00:11:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:56.682 00:11:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:56.682 ************************************ 00:37:56.682 END TEST nvmf_target_disconnect 00:37:56.682 ************************************ 00:37:56.682 00:11:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:56.682 00:37:56.682 real 7m39.897s 00:37:56.682 user 19m52.317s 00:37:56.682 sys 1m33.778s 00:37:56.682 00:11:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:56.682 00:11:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.682 ************************************ 00:37:56.682 END TEST nvmf_host 00:37:56.682 ************************************ 00:37:56.682 00:11:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:56.682 00:11:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:56.682 00:11:22 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:56.682 00:11:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:56.682 00:11:22 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:56.682 00:11:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:56.682 ************************************ 00:37:56.682 START TEST nvmf_target_core_interrupt_mode 00:37:56.682 ************************************ 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:56.682 * Looking for test storage... 00:37:56.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:56.682 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:56.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.683 --rc genhtml_branch_coverage=1 00:37:56.683 --rc genhtml_function_coverage=1 00:37:56.683 --rc genhtml_legend=1 00:37:56.683 --rc geninfo_all_blocks=1 00:37:56.683 --rc geninfo_unexecuted_blocks=1 00:37:56.683 00:37:56.683 ' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:56.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.683 --rc genhtml_branch_coverage=1 00:37:56.683 --rc genhtml_function_coverage=1 00:37:56.683 --rc genhtml_legend=1 00:37:56.683 --rc geninfo_all_blocks=1 00:37:56.683 --rc geninfo_unexecuted_blocks=1 00:37:56.683 00:37:56.683 ' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:56.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.683 --rc genhtml_branch_coverage=1 00:37:56.683 --rc genhtml_function_coverage=1 00:37:56.683 --rc genhtml_legend=1 00:37:56.683 --rc geninfo_all_blocks=1 00:37:56.683 --rc geninfo_unexecuted_blocks=1 00:37:56.683 00:37:56.683 ' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:56.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.683 --rc genhtml_branch_coverage=1 00:37:56.683 --rc genhtml_function_coverage=1 00:37:56.683 --rc genhtml_legend=1 00:37:56.683 --rc geninfo_all_blocks=1 00:37:56.683 --rc geninfo_unexecuted_blocks=1 00:37:56.683 00:37:56.683 ' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:56.683 ************************************ 00:37:56.683 START TEST nvmf_abort 00:37:56.683 ************************************ 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:56.683 * Looking for test storage... 00:37:56.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:56.683 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:56.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.684 --rc genhtml_branch_coverage=1 00:37:56.684 --rc genhtml_function_coverage=1 00:37:56.684 --rc genhtml_legend=1 00:37:56.684 --rc geninfo_all_blocks=1 00:37:56.684 --rc geninfo_unexecuted_blocks=1 00:37:56.684 00:37:56.684 ' 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:56.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.684 --rc genhtml_branch_coverage=1 00:37:56.684 --rc genhtml_function_coverage=1 00:37:56.684 --rc genhtml_legend=1 00:37:56.684 --rc geninfo_all_blocks=1 00:37:56.684 --rc geninfo_unexecuted_blocks=1 00:37:56.684 00:37:56.684 ' 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:56.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.684 --rc genhtml_branch_coverage=1 00:37:56.684 --rc genhtml_function_coverage=1 00:37:56.684 --rc genhtml_legend=1 00:37:56.684 --rc geninfo_all_blocks=1 00:37:56.684 --rc geninfo_unexecuted_blocks=1 00:37:56.684 00:37:56.684 ' 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:56.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.684 --rc genhtml_branch_coverage=1 00:37:56.684 --rc genhtml_function_coverage=1 00:37:56.684 --rc genhtml_legend=1 00:37:56.684 --rc geninfo_all_blocks=1 00:37:56.684 --rc geninfo_unexecuted_blocks=1 00:37:56.684 00:37:56.684 ' 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:56.684 00:11:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:56.684 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:56.685 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:56.685 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.685 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:56.685 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:56.685 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:56.685 00:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:58.588 00:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:58.588 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:58.589 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
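gather_supported_nvmf_pci_devs is selecting NICs by PCI ID here: 0x8086:0x159b is an Intel E810-family port bound to the ice driver, which is why 0000:0a:00.0 and 0000:0a:00.1 are both picked up as E810 test devices. A rough stand-alone equivalent (illustrative only, reading the same /sys/bus/pci/devices/<pci>/net paths the script itself uses):

    # Illustrative lookup of E810 (8086:159b) ports, their bound driver, and the netdev behind each.
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        driver=$(basename "$(readlink -f /sys/bus/pci/devices/$pci/driver)")
        netdevs=$(ls /sys/bus/pci/devices/$pci/net 2>/dev/null)
        echo "$pci driver=$driver netdevs=$netdevs"
    done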
00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:58.589 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:58.589 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:58.589 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:58.589 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:58.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:58.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:37:58.848 00:37:58.848 --- 10.0.0.2 ping statistics --- 00:37:58.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.848 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:58.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:58.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:37:58.848 00:37:58.848 --- 10.0.0.1 ping statistics --- 00:37:58.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.848 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3645878 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3645878 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3645878 ']' 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:58.848 00:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:58.848 [2024-11-10 00:11:24.977649] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:58.848 [2024-11-10 00:11:24.980322] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:37:58.848 [2024-11-10 00:11:24.980434] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:59.106 [2024-11-10 00:11:25.137252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:59.106 [2024-11-10 00:11:25.280279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:59.106 [2024-11-10 00:11:25.280353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:59.106 [2024-11-10 00:11:25.280383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:59.106 [2024-11-10 00:11:25.280406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:59.106 [2024-11-10 00:11:25.280429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:59.106 [2024-11-10 00:11:25.283173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:59.106 [2024-11-10 00:11:25.283187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.106 [2024-11-10 00:11:25.283200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:59.677 [2024-11-10 00:11:25.657421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:59.677 [2024-11-10 00:11:25.658492] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:59.677 [2024-11-10 00:11:25.659307] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
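The nvmf_tcp_init block traced above (test/nvmf/common.sh) is the per-test network bring-up: one e810 port, cvl_0_0, is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1); the target is then launched inside that namespace in interrupt mode. A condensed sketch of the same commands, with SPDK_DIR standing in for the full Jenkins workspace path shown in the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &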
00:37:59.677 [2024-11-10 00:11:25.659677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:59.936 00:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:59.936 00:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:37:59.936 00:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:59.936 00:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:59.936 00:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.936 00:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:59.936 00:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:59.936 00:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.936 00:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.936 [2024-11-10 00:11:25.984278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.936 Malloc0 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.936 Delay0 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.936 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.937 [2024-11-10 00:11:26.120469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.937 00:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:00.194 [2024-11-10 00:11:26.284989] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:02.720 Initializing NVMe Controllers 00:38:02.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:02.720 controller IO queue size 128 less than required 00:38:02.720 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:02.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:02.720 Initialization complete. Launching workers. 
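The rpc_cmd lines above are the autotest wrapper around scripts/rpc.py; issued directly against the running target, the abort.sh setup and the abort example run look roughly like this (relative scripts/ and build/ paths under SPDK_DIR are an assumption, the log shows the full workspace paths):

    "$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    "$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    "$SPDK_DIR"/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000     # delay bdev layered on Malloc0
    "$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # One second of queue-depth-128 I/O with aborts against the new subsystem:
    "$SPDK_DIR"/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128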
00:38:02.720 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 22716 00:38:02.720 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 22773, failed to submit 66 00:38:02.720 success 22716, unsuccessful 57, failed 0 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:02.720 rmmod nvme_tcp 00:38:02.720 rmmod nvme_fabrics 00:38:02.720 rmmod nvme_keyring 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3645878 ']' 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3645878 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3645878 ']' 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3645878 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3645878 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3645878' 00:38:02.720 killing process with pid 3645878 
00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3645878 00:38:02.720 00:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3645878 00:38:03.658 00:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:03.658 00:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:03.658 00:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:03.658 00:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:03.658 00:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:38:03.658 00:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:03.658 00:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:38:03.658 00:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:03.658 00:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:03.658 00:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.658 00:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.658 00:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.191 00:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:06.191 00:38:06.191 real 0m9.227s 00:38:06.191 user 0m11.573s 00:38:06.191 sys 0m3.061s 00:38:06.191 00:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:06.191 00:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:06.191 ************************************ 00:38:06.191 END TEST nvmf_abort 00:38:06.191 ************************************ 00:38:06.191 00:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:06.191 00:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:06.191 00:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:06.191 00:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:06.191 ************************************ 00:38:06.191 START TEST nvmf_ns_hotplug_stress 00:38:06.191 ************************************ 00:38:06.191 00:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:06.191 * Looking for test storage... 
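The nvmftestfini teardown traced at the end of the abort test unloads the NVMe/TCP modules, kills the target, strips only the SPDK-tagged iptables rule, and removes the namespace. Condensed to the commands visible in the trace (the explicit ip netns delete is an assumption — _remove_spdk_ns runs with its xtrace suppressed above):

    sync
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics/nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess; pid 3645878 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: keep everything except SPDK_NVMF rules
    ip netns delete cvl_0_0_ns_spdk                         # assumption: what _remove_spdk_ns amounts to
    ip -4 addr flush cvl_0_1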
00:38:06.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:06.191 00:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:06.191 00:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:38:06.191 00:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:06.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.191 --rc genhtml_branch_coverage=1 00:38:06.191 --rc genhtml_function_coverage=1 00:38:06.191 --rc genhtml_legend=1 00:38:06.191 --rc geninfo_all_blocks=1 00:38:06.191 --rc geninfo_unexecuted_blocks=1 00:38:06.191 00:38:06.191 ' 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:06.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.191 --rc genhtml_branch_coverage=1 00:38:06.191 --rc genhtml_function_coverage=1 00:38:06.191 --rc genhtml_legend=1 00:38:06.191 --rc geninfo_all_blocks=1 00:38:06.191 --rc geninfo_unexecuted_blocks=1 00:38:06.191 00:38:06.191 ' 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:06.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.191 --rc genhtml_branch_coverage=1 00:38:06.191 --rc genhtml_function_coverage=1 00:38:06.191 --rc genhtml_legend=1 00:38:06.191 --rc geninfo_all_blocks=1 00:38:06.191 --rc geninfo_unexecuted_blocks=1 00:38:06.191 00:38:06.191 ' 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:06.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.191 --rc genhtml_branch_coverage=1 00:38:06.191 --rc genhtml_function_coverage=1 
00:38:06.191 --rc genhtml_legend=1 00:38:06.191 --rc geninfo_all_blocks=1 00:38:06.191 --rc geninfo_unexecuted_blocks=1 00:38:06.191 00:38:06.191 ' 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:06.191 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
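The scripts/common.sh trace above ('lt 1.15 2' via cmp_versions) is the lcov version gate that decides which coverage flags to export. A minimal re-creation of that comparison, numeric components only (the real helper also normalizes each component through a decimal() function, as the trace shows):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2 v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]          # all components equal: true only for <=, >=, ==
    }
    lt 1.15 2 && echo "lcov is older than 2"    # the branch taken in the trace above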
00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:06.192 00:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:08.091 00:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:08.091 00:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:08.091 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:08.091 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:08.091 
00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:08.091 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:08.091 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:08.091 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:08.092 00:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:08.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:08.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:38:08.092 00:38:08.092 --- 10.0.0.2 ping statistics --- 00:38:08.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.092 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:08.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:08.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:38:08.092 00:38:08.092 --- 10.0.0.1 ping statistics --- 00:38:08.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.092 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:08.092 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3648474 00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3648474 00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3648474 ']' 00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:08.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:08.349 00:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:08.349 [2024-11-10 00:11:34.391480] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:08.349 [2024-11-10 00:11:34.394065] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:38:08.349 [2024-11-10 00:11:34.394177] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:08.349 [2024-11-10 00:11:34.549904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:08.607 [2024-11-10 00:11:34.688600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:08.607 [2024-11-10 00:11:34.688672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:08.607 [2024-11-10 00:11:34.688700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:08.607 [2024-11-10 00:11:34.688723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:08.607 [2024-11-10 00:11:34.688748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:08.607 [2024-11-10 00:11:34.691450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:08.607 [2024-11-10 00:11:34.691538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.607 [2024-11-10 00:11:34.691547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:08.865 [2024-11-10 00:11:35.062909] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:08.865 [2024-11-10 00:11:35.063986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:08.865 [2024-11-10 00:11:35.064790] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:08.865 [2024-11-10 00:11:35.065154] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:38:09.430 00:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:09.430 00:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:38:09.431 00:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:09.431 00:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:09.431 00:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:09.431 00:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:09.431 00:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:38:09.431 00:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:09.431 [2024-11-10 00:11:35.628641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.688 00:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:09.947 00:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:10.204 [2024-11-10 00:11:36.225239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:10.204 00:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:10.463 00:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:10.722 Malloc0 00:38:10.722 00:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:10.980 Delay0 00:38:10.980 00:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.238 00:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:11.496 NULL1 00:38:11.496 00:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
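By this point ns_hotplug_stress.sh has built its target configuration entirely through scripts/rpc.py, as traced above: a TCP transport, subsystem cnode1 capped at 10 namespaces (-m 10), a delay-wrapped malloc bdev as namespace 1, and a null bdev NULL1 as the second namespace. The same sequence, condensed (SPDK_DIR again standing in for the workspace checkout):

    "$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    "$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
    "$SPDK_DIR"/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # becomes nsid 1
    "$SPDK_DIR"/scripts/rpc.py bdev_null_create NULL1 1000 512
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1      # the resize victim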
00:38:11.754 00:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3648916 00:38:11.754 00:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:11.754 00:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:11.754 00:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:13.133 Read completed with error (sct=0, sc=11) 00:38:13.133 00:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:13.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.391 00:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:13.391 00:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:13.648 true 00:38:13.648 00:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:13.648 00:11:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:14.582 00:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:14.839 00:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:14.839 00:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:15.097 true 00:38:15.097 00:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:15.097 00:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.355 00:11:41 
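What follows in the log is the stress loop proper: spdk_nvme_perf hammers the subsystem from lcore 0 while the script hot-removes namespace 1, re-adds Delay0, and grows NULL1 one unit at a time, checking between steps that the perf process is still alive (kill -0). A condensed sketch of that loop; the exact loop bounds and error handling of ns_hotplug_stress.sh are not shown in this excerpt, and the while condition below is an assumption:

    "$SPDK_DIR"/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do     # run only while the I/O load is still up
        "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove nsid 1
        "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
        ((null_size++))
        "$SPDK_DIR"/scripts/rpc.py bdev_null_resize NULL1 "$null_size"    # 1001, 1002, ... as in the trace
    done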
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:15.613 00:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:15.613 00:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:15.870 true 00:38:15.870 00:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:15.870 00:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:16.127 00:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.387 00:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:16.387 00:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:16.646 true 00:38:16.646 00:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:16.646 00:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:17.579 00:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:17.838 00:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:17.838 00:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:18.095 true 00:38:18.095 00:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:18.095 00:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.352 00:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.612 00:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:18.612 00:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:18.870 true 00:38:18.870 00:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:18.870 00:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.127 00:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.384 00:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:38:19.384 00:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:19.643 true 00:38:19.643 00:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:19.643 00:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.016 00:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:21.016 00:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:21.016 00:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:21.274 true 00:38:21.274 00:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:21.274 00:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.532 00:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:21.790 00:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:38:21.790 00:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:22.048 true 00:38:22.048 00:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:22.048 00:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.305 00:11:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.562 00:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:22.563 00:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:22.820 true 00:38:22.820 00:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:22.820 00:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:23.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:23.753 00:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.011 00:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:24.011 00:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:24.273 true 00:38:24.533 00:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:24.533 00:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.791 00:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:25.049 00:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:25.049 00:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:25.307 true 00:38:25.307 00:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:25.307 00:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:25.565 00:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:25.823 00:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:25.823 00:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:26.081 true 00:38:26.081 00:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:26.081 00:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:27.019 00:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:27.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:27.277 00:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:27.277 00:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:27.535 true 00:38:27.535 00:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:27.535 00:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.793 00:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:28.050 00:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:28.050 00:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:28.308 true 00:38:28.308 00:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:28.308 00:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.242 00:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:29.500 00:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:29.500 00:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:29.758 true 00:38:29.758 00:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 3648916 00:38:29.758 00:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:30.015 00:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:30.273 00:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:30.273 00:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:30.532 true 00:38:30.532 00:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:30.532 00:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:30.790 00:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:31.048 00:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:31.048 00:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:31.306 true 00:38:31.306 00:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:31.306 00:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.237 00:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.494 00:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:32.494 00:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:32.752 true 00:38:32.752 00:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:32.752 00:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:33.010 00:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:33.268 00:11:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:33.269 00:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:33.526 true 00:38:33.526 00:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:33.526 00:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:33.784 00:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:34.042 00:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:34.042 00:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:34.301 true 00:38:34.559 00:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:34.559 00:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:35.492 00:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:35.749 00:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:35.749 00:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:36.007 true 00:38:36.007 00:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:36.007 00:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:36.264 00:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:36.521 00:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:38:36.521 00:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:36.781 true 00:38:36.781 00:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:36.781 00:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:37.062 00:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:37.337 00:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:37.337 00:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:37.605 true 00:38:37.605 00:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:37.606 00:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:38.540 00:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:38.797 00:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:38.797 00:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:39.053 true 00:38:39.053 00:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:39.053 00:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:39.310 00:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:39.876 00:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:39.876 00:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:39.876 true 00:38:39.876 00:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:39.876 00:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:40.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:40.809 00:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:40.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:40.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:41.066 00:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:41.066 00:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:41.324 true 00:38:41.324 00:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:41.324 00:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.582 00:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:42.146 00:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:38:42.146 00:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:38:42.146 Initializing NVMe Controllers 00:38:42.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:42.146 Controller IO queue size 128, less than required. 00:38:42.146 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:42.146 Controller IO queue size 128, less than required. 00:38:42.146 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:42.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:42.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:42.146 Initialization complete. Launching workers. 
00:38:42.146 ======================================================== 00:38:42.146 Latency(us) 00:38:42.146 Device Information : IOPS MiB/s Average min max 00:38:42.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 560.09 0.27 94570.64 3184.09 1027819.25 00:38:42.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6637.57 3.24 19283.20 3070.01 477638.55 00:38:42.146 ======================================================== 00:38:42.146 Total : 7197.65 3.51 25141.71 3070.01 1027819.25 00:38:42.146 00:38:42.146 true 00:38:42.146 00:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3648916 00:38:42.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3648916) - No such process 00:38:42.146 00:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3648916 00:38:42.146 00:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.711 00:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:42.969 00:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:42.969 00:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:42.969 00:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:42.969 00:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:42.969 00:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:43.227 null0 00:38:43.227 00:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:43.227 00:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:43.227 00:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:43.485 null1 00:38:43.485 00:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:43.485 00:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:43.485 00:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:43.743 null2 00:38:43.743 00:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:43.743 00:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:43.743 00:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:44.001 null3 00:38:44.001 00:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:44.001 00:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:44.001 00:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:44.259 null4 00:38:44.259 00:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:44.259 00:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:44.259 00:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:44.516 null5 00:38:44.516 00:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:44.516 00:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:44.516 00:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:44.774 null6 00:38:44.774 00:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:44.774 00:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:44.774 00:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:45.033 null7 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
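From this point the trace launches the test's second phase: the eight null bdevs created just above (null0 through null7) are attached and detached concurrently by background workers. A condensed sketch of the launcher that the xtrace at script lines 58-66 reflects is shown here; the backgrounding and quoting are reconstructed from the pids+=($!) bookkeeping and the final wait on eight PIDs, so the exact shell syntax is an assumption.

    # Create one null bdev per worker, spawn the workers, then wait for all of them
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # trace shows: add_remove 1 null0 ... add_remove 8 null7
        pids+=($!)
    done
    wait "${pids[@]}"                      # trace shows: wait 3653409 3653410 ... 3653422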
00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:45.033 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3653409 3653410 3653412 3653414 3653416 3653418 3653420 3653422 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.034 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:45.292 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:45.292 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:45.293 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:45.293 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:45.293 00:12:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.293 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:45.293 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:45.293 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:45.551 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:45.809 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.809 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:45.809 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:45.809 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:45.809 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:45.809 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:45.809 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:45.809 00:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
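The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls above and below come from those eight workers iterating in parallel. Judging from the xtrace of ns_hotplug_stress.sh lines 14-18 and from calls such as "add_remove 1 null0", the helper appears to behave roughly as follows; its signature and argument handling are inferred, so this is a sketch rather than the script's literal definition.

    # Attach and detach one namespace ten times (per the traced bound i < 10)
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }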
00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.067 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:46.324 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.324 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.324 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:46.582 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.582 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:46.582 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:46.582 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:46.582 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:46.582 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:46.582 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:46.582 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:46.840 00:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:47.099 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:47.099 00:12:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:47.099 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:47.099 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:47.099 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:47.099 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:47.099 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:47.099 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.357 00:12:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:47.357 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:47.616 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:47.616 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:47.616 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:47.616 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:47.616 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:47.616 
00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:47.616 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:47.616 00:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.875 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:48.132 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:48.132 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:48.390 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:48.390 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:48.390 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:48.390 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:48.390 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:48.390 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:48.648 
00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:48.648 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:48.907 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:48.907 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:48.907 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:48.907 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:48.907 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:48.907 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:48.907 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:48.907 00:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.166 00:12:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.166 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:49.424 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:49.424 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:49.424 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:49.424 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:49.424 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:49.424 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:49.424 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:49.424 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:49.682 00:12:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.682 00:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:49.940 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:49.940 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:49.940 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:49.940 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:49.940 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:50.197 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:50.197 
00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:50.197 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.456 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:50.714 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:50.714 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:50.714 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:50.714 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:50.714 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:50.714 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:50.714 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:50.714 00:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
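The xtrace repeated above comes from target/ns_hotplug_stress.sh: line 16 drives a counter through ten passes, line 17 attaches namespaces 1-8 (backed by bdevs null0-null7) to nqn.2016-06.io.spdk:cnode1 in a shuffled order, and line 18 detaches them again while I/O keeps running. A minimal sketch of that loop as reconstructed from the trace alone; the shuffle helper and variable names are assumptions, not the script's actual source:
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for (( i = 0; i < 10; ++i )); do                              # ns_hotplug_stress.sh@16: ten add/remove passes
    for n in $(shuf -i 1-8); do                               # shuffled namespace IDs (assumed helper)
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"   # sh@17
    done
    for n in $(shuf -i 1-8); do
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"                    # sh@18
    done
done
The interleaved ordering of the add/remove lines in the log is the point of the stress test: namespace attach and detach race against in-flight I/O on the TCP transport.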
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:50.973 rmmod nvme_tcp 00:38:50.973 rmmod nvme_fabrics 00:38:50.973 rmmod nvme_keyring 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3648474 ']' 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3648474 00:38:50.973 00:12:17 
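Once the counter reaches 10 the trap is cleared and nvmftestfini tears the target down; the nvmf/common.sh@121-129 lines above show nvmfcleanup retrying the kernel module unloads (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are its output). A rough sketch of that path inferred from the trace; the transport variable name, break-on-success, and back-off are assumptions:
nvmfcleanup() {
    sync                                                      # nvmf/common.sh@121
    if [[ $TEST_TRANSPORT == tcp ]]; then                     # @123: '[' tcp == tcp ']' above; variable name assumed
        set +e                                                # @124: unloads may fail while connections drain
        for i in {1..20}; do                                  # @125
            # @126/@127: unload nvme-tcp then nvme-fabrics (produces the rmmod lines seen above)
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # break-on-success assumed
            sleep 1                                           # assumed back-off between attempts
        done
        set -e                                                # @128
    fi
}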
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3648474 ']' 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3648474 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3648474 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3648474' 00:38:50.973 killing process with pid 3648474 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3648474 00:38:50.973 00:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3648474 00:38:52.348 00:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:52.348 00:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:52.348 00:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:52.348 00:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:52.348 00:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:38:52.348 00:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:52.348 00:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:38:52.348 00:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:52.348 00:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:52.348 00:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.348 00:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.348 00:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:54.259 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:54.259 00:38:54.259 real 0m48.533s 00:38:54.259 user 3m18.231s 00:38:54.259 sys 0m21.459s 00:38:54.259 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:54.259 00:12:20 
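killprocess 3648474 then stops the nvmf_tgt reactor that served this test. The autotest_common.sh@952-976 checks traced above map onto roughly the following helper; treat it as a sketch assembled from the trace rather than the exact helper body:
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                                 # autotest_common.sh@952: refuse an empty pid
    kill -0 "$pid"                                            # @956: fail if the process already exited
    if [[ $(uname) == Linux ]]; then                          # @957
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")       # @958: resolves to reactor_1 in this run
        # @962 special-cases process_name == sudo (not taken here; details elided)
    fi
    echo "killing process with pid $pid"                      # @970: the "killing process with pid 3648474" line above
    kill "$pid"                                               # @971
    wait "$pid"                                               # @976: reap the target and propagate its exit status
}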
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:54.259 ************************************ 00:38:54.259 END TEST nvmf_ns_hotplug_stress 00:38:54.259 ************************************ 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:54.520 ************************************ 00:38:54.520 START TEST nvmf_delete_subsystem 00:38:54.520 ************************************ 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:54.520 * Looking for test storage... 00:38:54.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:54.520 00:12:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:54.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.520 --rc genhtml_branch_coverage=1 00:38:54.520 --rc genhtml_function_coverage=1 00:38:54.520 --rc genhtml_legend=1 00:38:54.520 --rc geninfo_all_blocks=1 00:38:54.520 --rc geninfo_unexecuted_blocks=1 00:38:54.520 00:38:54.520 ' 00:38:54.520 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:54.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.520 --rc genhtml_branch_coverage=1 00:38:54.520 --rc genhtml_function_coverage=1 00:38:54.520 --rc genhtml_legend=1 00:38:54.520 --rc geninfo_all_blocks=1 00:38:54.520 --rc geninfo_unexecuted_blocks=1 00:38:54.521 00:38:54.521 ' 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:54.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.521 --rc genhtml_branch_coverage=1 00:38:54.521 --rc genhtml_function_coverage=1 00:38:54.521 --rc genhtml_legend=1 00:38:54.521 --rc geninfo_all_blocks=1 00:38:54.521 --rc 
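Before the delete_subsystem test body runs, autotest_common.sh probes the installed lcov (1.15 on this runner) and, because it is older than 2.x, exports the legacy --rc lcov_branch_coverage/lcov_function_coverage options shown below it. The scripts/common.sh@333-368 comparison traced above behaves roughly like this sketch; function names follow the trace, while zero-padding of missing components is an assumption:
cmp_versions() {                                              # usage: cmp_versions 1.15 '<' 2
    local IFS='.-:' op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"                                    # scripts/common.sh@336: "1.15" -> (1 15)
    read -ra ver2 <<< "$3"                                    # @337: "2" -> (2)
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do                         # @364: walk components left to right
        local a=${ver1[v]:-0} b=${ver2[v]:-0}                 # padding missing fields with 0 is an assumption
        (( a > b )) && { [[ $op == '>' ]]; return; }          # @367
        (( a < b )) && { [[ $op == '<' ]]; return; }          # @368
    done
    [[ $op == *'='* ]]                                        # equal versions only satisfy ==, <= or >=
}
lt() { cmp_versions "$1" '<' "$2"; }                          # lt 1.15 2 succeeds here, selecting the legacy lcov flags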
geninfo_unexecuted_blocks=1 00:38:54.521 00:38:54.521 ' 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:54.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.521 --rc genhtml_branch_coverage=1 00:38:54.521 --rc genhtml_function_coverage=1 00:38:54.521 --rc genhtml_legend=1 00:38:54.521 --rc geninfo_all_blocks=1 00:38:54.521 --rc geninfo_unexecuted_blocks=1 00:38:54.521 00:38:54.521 ' 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:54.521 00:12:20 
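The trace up to this point is test/nvmf/common.sh establishing its defaults before nvmftestinit runs; condensed, with values copied from this run (the host NQN/ID are machine-specific), it amounts to roughly:

NVMF_PORT=4420  NVMF_SECOND_PORT=4421  NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:5b23e107-... on this machine
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NET_TYPE=phy                         # physical NICs rather than virtual interfaces

The repeated PATH assignments that follow are paths/export.sh prepending the Go, protoc and golangci tool directories on every source; the visible duplication is harmless, since PATH lookup stops at the first match.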
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:54.521 00:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:56.422 00:12:22 
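build_nvmf_app_args above only takes the interrupt-mode branch, so the target command line being assembled is, pieced together from this log (the binary path is the one shown later when the app is actually started):

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id and tracepoint group mask
NVMF_APP+=(--interrupt-mode)                  # the '[' 1 -eq 1 ']' branch above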
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:56.422 00:12:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:56.422 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:56.422 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:56.423 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:56.423 00:12:22 
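gather_supported_nvmf_pci_devs is walking the PCI bus for supported NICs; in this run it matches the two ports of an Intel E810 (device id 0x159b, driver ice) and then reads the kernel netdev names out of sysfs. The same lookup can be reproduced by hand, with the addresses taken from this log:

lspci -d 8086:159b                        # the two ports reported as Found 0000:0a:00.0 / 0000:0a:00.1
ls /sys/bus/pci/devices/0000:0a:00.0/net  # -> cvl_0_0
ls /sys/bus/pci/devices/0000:0a:00.1/net  # -> cvl_0_1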
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:56.423 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:56.423 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:56.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:56.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:38:56.423 00:38:56.423 --- 10.0.0.2 ping statistics --- 00:38:56.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:56.423 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:56.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:56.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:38:56.423 00:38:56.423 --- 10.0.0.1 ping statistics --- 00:38:56.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:56.423 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:38:56.423 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:56.424 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:38:56.424 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:56.424 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:56.424 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3656292 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3656292 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3656292 ']' 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:56.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
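nvmf_tcp_init above builds the point-to-point topology used for the rest of the test: the first E810 port is moved into a private network namespace and becomes the target side, the second stays in the root namespace as the initiator, an iptables rule opens the NVMe/TCP port, and both directions are ping-checked. The commands as executed in this run:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # via the ipts wrapper, which also adds a comment
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1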
00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:56.682 00:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:56.682 [2024-11-10 00:12:22.740633] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:56.682 [2024-11-10 00:12:22.743294] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:38:56.682 [2024-11-10 00:12:22.743414] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:56.941 [2024-11-10 00:12:22.900957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:56.941 [2024-11-10 00:12:23.040547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:56.941 [2024-11-10 00:12:23.040638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:56.941 [2024-11-10 00:12:23.040668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:56.941 [2024-11-10 00:12:23.040690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:56.941 [2024-11-10 00:12:23.040721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:56.941 [2024-11-10 00:12:23.043336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:56.941 [2024-11-10 00:12:23.043344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:57.199 [2024-11-10 00:12:23.375560] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:57.199 [2024-11-10 00:12:23.376178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:57.199 [2024-11-10 00:12:23.376463] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
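With the namespace in place, nvmfappstart launches the target inside it on a two-core mask with interrupt mode enabled and waits for the RPC socket; the notices above confirm both reactors and all spdk_threads came up in interrupt mode. The launch recorded here:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!        # 3656292 in this run; waitforlisten then polls /var/tmp/spdk.sock until it answers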
00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.765 [2024-11-10 00:12:23.724361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.765 [2024-11-10 00:12:23.744684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.765 NULL1 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.765 00:12:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.765 Delay0 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3656444 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:57.765 00:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:57.765 [2024-11-10 00:12:23.880085] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
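The RPC sequence above builds the object under test: a TCP transport, one subsystem backed by a null bdev wrapped in a delay bdev (so that plenty of I/O is still queued when the subsystem is deleted), and an spdk_nvme_perf initiator started against it in the background. Collected in one place, as issued in this run:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512          # 1000 MiB backing device, 512-byte blocks
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s added latency
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!       # 3656444 in this run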
00:38:59.661 00:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:59.661 00:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.661 00:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 starting I/O failed: -6 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 starting I/O failed: -6 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 starting I/O failed: -6 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 starting I/O failed: -6 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 starting I/O failed: -6 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 starting I/O failed: -6 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 starting I/O failed: -6 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 starting I/O failed: -6 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 starting I/O failed: -6 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 starting I/O failed: -6 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 starting I/O failed: -6 00:38:59.920 Write completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 [2024-11-10 00:12:25.948229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.920 Read completed with error (sct=0, sc=8) 00:38:59.921 
starting I/O failed: -6 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 starting I/O failed: -6 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 starting I/O failed: -6 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 starting I/O failed: -6 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 starting I/O failed: -6 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 starting I/O failed: -6 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 starting I/O failed: -6 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 starting I/O failed: -6 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 starting I/O failed: -6 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 starting I/O failed: -6 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 starting I/O failed: -6 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 [2024-11-10 00:12:25.949555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 
00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 [2024-11-10 00:12:25.950258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Write 
completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 Write completed with error (sct=0, sc=8) 00:38:59.921 Read completed with error (sct=0, sc=8) 00:38:59.921 [2024-11-10 00:12:25.951318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016b00 is same with the state(6) to be set 00:39:00.855 [2024-11-10 00:12:26.914410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 [2024-11-10 00:12:26.950491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:39:00.855 
Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 [2024-11-10 00:12:26.951330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 [2024-11-10 00:12:26.952886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 
Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Write completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 Read completed with error (sct=0, sc=8) 00:39:00.855 [2024-11-10 00:12:26.956535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:39:00.855 00:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.855 00:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:39:00.855 00:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3656444 00:39:00.855 00:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:39:00.855 Initializing NVMe Controllers 00:39:00.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:00.855 Controller IO queue size 128, less than required. 00:39:00.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:00.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:00.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:00.855 Initialization complete. Launching workers. 
00:39:00.855 ======================================================== 00:39:00.855 Latency(us) 00:39:00.855 Device Information : IOPS MiB/s Average min max 00:39:00.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.73 0.08 894979.54 1086.83 1017555.36 00:39:00.856 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.29 0.08 905901.64 1777.59 1016937.05 00:39:00.856 ======================================================== 00:39:00.856 Total : 336.02 0.16 900352.12 1086.83 1017555.36 00:39:00.856 00:39:00.856 [2024-11-10 00:12:26.958316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:39:00.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3656444 00:39:01.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3656444) - No such process 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3656444 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3656444 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3656444 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
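The wall of "completed with error (sct=0, sc=8)" lines and the tqpair state errors above are the expected outcome: nvmf_delete_subsystem is issued while the delay bdev is still holding roughly a second of queued I/O, so the outstanding commands complete with an abort status and spdk_nvme_perf exits reporting errors (visible in the low IOPS and the "errors occurred" line of the summary). The script then polls for the perf process and asserts that reaping it reports failure; reconstructed from the delete_subsystem.sh line numbers in the messages, with the exact loop shape assumed:

rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # line 32, while perf is mid-run
delay=0
while kill -0 "$perf_pid"; do      # line 35; prints 'No such process' once perf has gone
    sleep 0.5
    (( delay++ > 30 )) && break    # bail-out bound from line 38
done
NOT wait "$perf_pid"               # line 45; NOT succeeds only because wait reports a non-zero status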
00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:01.422 [2024-11-10 00:12:27.480783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3656959 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3656959 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:01.422 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:01.422 [2024-11-10 00:12:27.598275] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
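The second half repeats the setup against the already-running target: the subsystem is re-created with the same Delay0 namespace and a shorter three-second perf run is started, after which the script again polls with kill -0 (the "(( delay++ > 20 ))" iterations that follow) until the initiator exits and reaps it with the plain wait visible further down. As issued in this run:

rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!       # 3656959 in this run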
00:39:01.988 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:01.988 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3656959 00:39:01.988 00:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:02.553 00:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:02.553 00:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3656959 00:39:02.553 00:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:02.814 00:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:02.814 00:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3656959 00:39:02.814 00:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:03.380 00:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:03.380 00:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3656959 00:39:03.380 00:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:03.977 00:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:03.977 00:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3656959 00:39:03.977 00:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:04.544 00:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:04.544 00:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3656959 00:39:04.544 00:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:04.801 Initializing NVMe Controllers 00:39:04.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:04.801 Controller IO queue size 128, less than required. 00:39:04.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:04.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:04.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:04.801 Initialization complete. Launching workers. 
00:39:04.801 ======================================================== 00:39:04.801 Latency(us) 00:39:04.801 Device Information : IOPS MiB/s Average min max 00:39:04.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006306.86 1000332.52 1041878.89 00:39:04.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006907.36 1000242.28 1045496.85 00:39:04.801 ======================================================== 00:39:04.801 Total : 256.00 0.12 1006607.11 1000242.28 1045496.85 00:39:04.801 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3656959 00:39:05.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3656959) - No such process 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3656959 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:05.059 rmmod nvme_tcp 00:39:05.059 rmmod nvme_fabrics 00:39:05.059 rmmod nvme_keyring 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3656292 ']' 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3656292 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3656292 ']' 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3656292 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3656292 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3656292' 00:39:05.059 killing process with pid 3656292 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3656292 00:39:05.059 00:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3656292 00:39:06.433 00:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:06.433 00:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:06.433 00:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:06.433 00:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:06.433 00:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:39:06.433 00:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:39:06.433 00:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:06.433 00:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:06.433 00:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:06.433 00:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.433 00:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:06.433 00:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.346 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:08.346 00:39:08.346 real 0m13.829s 00:39:08.346 user 0m26.202s 00:39:08.346 sys 0m3.778s 00:39:08.346 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:08.346 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:08.346 ************************************ 00:39:08.346 END TEST nvmf_delete_subsystem 00:39:08.346 ************************************ 00:39:08.346 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:08.346 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:08.347 ************************************ 00:39:08.347 START TEST nvmf_host_management 00:39:08.347 ************************************ 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:08.347 * Looking for test storage... 00:39:08.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:08.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.347 --rc genhtml_branch_coverage=1 00:39:08.347 --rc genhtml_function_coverage=1 00:39:08.347 --rc genhtml_legend=1 00:39:08.347 --rc geninfo_all_blocks=1 00:39:08.347 --rc geninfo_unexecuted_blocks=1 00:39:08.347 00:39:08.347 ' 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:08.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.347 --rc genhtml_branch_coverage=1 00:39:08.347 --rc genhtml_function_coverage=1 00:39:08.347 --rc genhtml_legend=1 00:39:08.347 --rc geninfo_all_blocks=1 00:39:08.347 --rc geninfo_unexecuted_blocks=1 00:39:08.347 00:39:08.347 ' 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:08.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.347 --rc genhtml_branch_coverage=1 00:39:08.347 --rc genhtml_function_coverage=1 00:39:08.347 --rc genhtml_legend=1 00:39:08.347 --rc geninfo_all_blocks=1 00:39:08.347 --rc geninfo_unexecuted_blocks=1 00:39:08.347 00:39:08.347 ' 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:08.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.347 --rc genhtml_branch_coverage=1 00:39:08.347 --rc genhtml_function_coverage=1 00:39:08.347 --rc genhtml_legend=1 
00:39:08.347 --rc geninfo_all_blocks=1 00:39:08.347 --rc geninfo_unexecuted_blocks=1 00:39:08.347 00:39:08.347 ' 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.347 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:08.348 00:12:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:08.348 00:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:10.886 00:12:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:10.886 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:10.887 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:10.887 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
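This stretch of the trace is nvmf/common.sh mapping the two supported E810 functions (0000:0a:00.0 and 0000:0a:00.1, device 0x8086:0x159b, ice driver) to their kernel net devices through sysfs, which is where the cvl_0_0 and cvl_0_1 names reported just below come from. A hand-rolled sketch of that lookup, mirroring the pci_net_devs expansion in the trace:

# Resolve each supported PCI function to its net device(s) via sysfs.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $netdev ]] || continue
        echo "Found net devices under $pci: ${netdev##*/}"
    done
done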
00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:10.887 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:10.887 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:10.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:10.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:39:10.887 00:39:10.887 --- 10.0.0.2 ping statistics --- 00:39:10.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.887 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:10.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:10.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:39:10.887 00:39:10.887 --- 10.0.0.1 ping statistics --- 00:39:10.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.887 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3659427 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3659427 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3659427 ']' 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:10.887 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:10.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:10.888 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:10.888 00:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.888 [2024-11-10 00:12:36.965087] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:10.888 [2024-11-10 00:12:36.967673] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:39:10.888 [2024-11-10 00:12:36.967788] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:11.146 [2024-11-10 00:12:37.110315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:11.146 [2024-11-10 00:12:37.236019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:11.146 [2024-11-10 00:12:37.236082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:11.146 [2024-11-10 00:12:37.236109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:11.146 [2024-11-10 00:12:37.236127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:11.146 [2024-11-10 00:12:37.236147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:11.146 [2024-11-10 00:12:37.238711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:11.146 [2024-11-10 00:12:37.238774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:11.146 [2024-11-10 00:12:37.238815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:11.146 [2024-11-10 00:12:37.238826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:11.414 [2024-11-10 00:12:37.563226] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:11.414 [2024-11-10 00:12:37.579897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:11.414 [2024-11-10 00:12:37.580062] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:11.414 [2024-11-10 00:12:37.580892] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:11.414 [2024-11-10 00:12:37.581194] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
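By this point the target side is fully plumbed: cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, connectivity is verified with ping in both directions, and nvmf_tgt is started in interrupt mode on cores 1-4 inside the namespace. A condensed sketch of that plumbing using only commands visible in the trace; the readiness poll at the end is a simplified stand-in for the harness's waitforlisten helper, reusing the framework_wait_init RPC that the bdevperf side issues later in this log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
# Poll the RPC socket until the application answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died before listening
    sleep 0.5
done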
00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.981 [2024-11-10 00:12:37.947865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.981 00:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.981 Malloc0 00:39:11.981 [2024-11-10 00:12:38.060148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3659603 00:39:11.981 00:12:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3659603 /var/tmp/bdevperf.sock 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3659603 ']' 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:11.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:11.981 { 00:39:11.981 "params": { 00:39:11.981 "name": "Nvme$subsystem", 00:39:11.981 "trtype": "$TEST_TRANSPORT", 00:39:11.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:11.981 "adrfam": "ipv4", 00:39:11.981 "trsvcid": "$NVMF_PORT", 00:39:11.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:11.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:11.981 "hdgst": ${hdgst:-false}, 00:39:11.981 "ddgst": ${ddgst:-false} 00:39:11.981 }, 00:39:11.981 "method": "bdev_nvme_attach_controller" 00:39:11.981 } 00:39:11.981 EOF 00:39:11.981 )") 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
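The bdevperf side is configured through gen_nvmf_target_json, which assembles one bdev_nvme_attach_controller entry per subsystem (the expanded entry is printed just below) and streams it to bdevperf via --json /dev/fd/63. A standalone sketch, under the assumption that the standard SPDK JSON-config wrapper ("subsystems"/"bdev"/"config") is all the helper adds around that entry; the file name and the wrapper are assumptions, while the attach-controller parameters and bdevperf flags are taken from the trace:

# Write the generated config to a file instead of streaming it on fd 63 (assumed path).
cat > /tmp/nvme0_bdev.json << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0_bdev.json \
    -q 64 -o 65536 -w verify -t 10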
00:39:11.981 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:11.982 00:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:11.982 "params": { 00:39:11.982 "name": "Nvme0", 00:39:11.982 "trtype": "tcp", 00:39:11.982 "traddr": "10.0.0.2", 00:39:11.982 "adrfam": "ipv4", 00:39:11.982 "trsvcid": "4420", 00:39:11.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:11.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:11.982 "hdgst": false, 00:39:11.982 "ddgst": false 00:39:11.982 }, 00:39:11.982 "method": "bdev_nvme_attach_controller" 00:39:11.982 }' 00:39:11.982 [2024-11-10 00:12:38.180057] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:39:11.982 [2024-11-10 00:12:38.180207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659603 ] 00:39:12.239 [2024-11-10 00:12:38.324783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.497 [2024-11-10 00:12:38.453004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.064 Running I/O for 10 seconds... 00:39:13.064 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:13.064 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:39:13.064 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=195 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 195 -ge 100 ']' 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.065 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.065 [2024-11-10 00:12:39.209242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.209971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:13.065 [2024-11-10 00:12:39.210150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.065 [2024-11-10 00:12:39.210206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.065 [2024-11-10 00:12:39.210249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.065 [2024-11-10 00:12:39.210273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.065 [2024-11-10 00:12:39.210300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.065 [2024-11-10 00:12:39.210321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.065 [2024-11-10 00:12:39.210346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.065 [2024-11-10 00:12:39.210368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.065 [2024-11-10 00:12:39.210393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.065 [2024-11-10 00:12:39.210415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.065 [2024-11-10 00:12:39.210438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.065 [2024-11-10 00:12:39.210460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.065 [2024-11-10 00:12:39.210484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.065 [2024-11-10 00:12:39.210506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.065 [2024-11-10 00:12:39.210549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.065 [2024-11-10 00:12:39.210572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:39:13.065 [2024-11-10 00:12:39.210605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.065 [2024-11-10 00:12:39.210639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.065 [2024-11-10 00:12:39.210669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.065 [2024-11-10 00:12:39.210693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.065 [2024-11-10 00:12:39.210717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.065 [2024-11-10 00:12:39.210739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.210763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.210784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.210808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.210829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.210854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.210886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.210910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.210932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.210962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.210984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:13.066 [2024-11-10 00:12:39.211099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 
[2024-11-10 00:12:39.211556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.211962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.211983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.212010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.212032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 
00:12:39.212055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.212076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.212100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.212122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.212147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.212169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.066 [2024-11-10 00:12:39.212193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.212214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.212239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.212260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.066 [2024-11-10 00:12:39.212284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.066 [2024-11-10 00:12:39.212306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:13.067 [2024-11-10 00:12:39.212351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.212401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.212446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 00:12:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.067 [2024-11-10 00:12:39.212471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.212493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.212538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:13.067 [2024-11-10 00:12:39.212583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.212662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.212708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.212752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.212797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.212842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.212896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.212941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.212970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.212992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.213017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.213038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.213061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.213082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.213105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.213126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.213151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.213173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.213196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.213217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.213240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.067 [2024-11-10 00:12:39.213262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:13.067 [2024-11-10 00:12:39.214815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:39:13.067 task offset: 32768 on job bdev=Nvme0n1 fails 00:39:13.067 00:39:13.067 Latency(us) 00:39:13.067 [2024-11-09T23:12:39.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.067 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:13.067 Job: Nvme0n1 ended in about 0.22 seconds with error 00:39:13.067 Verification LBA range: start 0x0 length 0x400 00:39:13.067 Nvme0n1 : 0.22 1189.25 74.33 297.31 0.00 40878.00 6213.78 40972.14 00:39:13.067 [2024-11-09T23:12:39.268Z] =================================================================================================================== 00:39:13.067 [2024-11-09T23:12:39.268Z] Total : 
1189.25 74.33 297.31 0.00 40878.00 6213.78 40972.14 00:39:13.067 [2024-11-10 00:12:39.219717] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:13.067 [2024-11-10 00:12:39.219765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:39:13.067 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.067 00:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:13.067 [2024-11-10 00:12:39.226309] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3659603 00:39:14.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3659603) - No such process 00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:14.438 { 00:39:14.438 "params": { 00:39:14.438 "name": "Nvme$subsystem", 00:39:14.438 "trtype": "$TEST_TRANSPORT", 00:39:14.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:14.438 "adrfam": "ipv4", 00:39:14.438 "trsvcid": "$NVMF_PORT", 00:39:14.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:14.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:14.438 "hdgst": ${hdgst:-false}, 00:39:14.438 "ddgst": ${ddgst:-false} 00:39:14.438 }, 00:39:14.438 "method": "bdev_nvme_attach_controller" 00:39:14.438 } 00:39:14.438 EOF 00:39:14.438 )") 00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:14.438 00:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:14.438 "params": { 00:39:14.438 "name": "Nvme0", 00:39:14.438 "trtype": "tcp", 00:39:14.438 "traddr": "10.0.0.2", 00:39:14.438 "adrfam": "ipv4", 00:39:14.438 "trsvcid": "4420", 00:39:14.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:14.438 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:14.438 "hdgst": false, 00:39:14.438 "ddgst": false 00:39:14.438 }, 00:39:14.438 "method": "bdev_nvme_attach_controller" 00:39:14.438 }' 00:39:14.438 [2024-11-10 00:12:40.308201] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:39:14.438 [2024-11-10 00:12:40.308359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659882 ] 00:39:14.438 [2024-11-10 00:12:40.447336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:14.438 [2024-11-10 00:12:40.575742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:15.001 Running I/O for 1 seconds... 00:39:15.933 1374.00 IOPS, 85.88 MiB/s 00:39:15.933 Latency(us) 00:39:15.933 [2024-11-09T23:12:42.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:15.933 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:15.933 Verification LBA range: start 0x0 length 0x400 00:39:15.933 Nvme0n1 : 1.02 1404.93 87.81 0.00 0.00 44511.06 3907.89 40001.23 00:39:15.933 [2024-11-09T23:12:42.134Z] =================================================================================================================== 00:39:15.933 [2024-11-09T23:12:42.134Z] Total : 1404.93 87.81 0.00 0.00 44511.06 3907.89 40001.23 00:39:16.866 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:16.866 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:16.866 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:16.866 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:16.866 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:39:16.866 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:16.867 rmmod nvme_tcp 00:39:16.867 rmmod nvme_fabrics 00:39:16.867 rmmod nvme_keyring 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3659427 ']' 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3659427 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3659427 ']' 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3659427 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3659427 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3659427' 00:39:16.867 killing process with pid 3659427 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3659427 00:39:16.867 00:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3659427 00:39:18.242 [2024-11-10 00:12:44.222559] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:18.242 00:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:18.242 00:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:18.242 00:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:18.242 00:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:39:18.242 00:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:39:18.242 00:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:18.242 00:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:39:18.242 00:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:18.242 00:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:39:18.242 00:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:18.242 00:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:18.242 00:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:20.776 00:39:20.776 real 0m11.992s 00:39:20.776 user 0m25.461s 00:39:20.776 sys 0m4.551s 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:20.776 ************************************ 00:39:20.776 END TEST nvmf_host_management 00:39:20.776 ************************************ 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:20.776 ************************************ 00:39:20.776 START TEST nvmf_lvol 00:39:20.776 ************************************ 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:20.776 * Looking for test storage... 
00:39:20.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:20.776 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:20.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.777 --rc genhtml_branch_coverage=1 00:39:20.777 --rc genhtml_function_coverage=1 00:39:20.777 --rc genhtml_legend=1 00:39:20.777 --rc geninfo_all_blocks=1 00:39:20.777 --rc geninfo_unexecuted_blocks=1 00:39:20.777 00:39:20.777 ' 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:20.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.777 --rc genhtml_branch_coverage=1 00:39:20.777 --rc genhtml_function_coverage=1 00:39:20.777 --rc genhtml_legend=1 00:39:20.777 --rc geninfo_all_blocks=1 00:39:20.777 --rc geninfo_unexecuted_blocks=1 00:39:20.777 00:39:20.777 ' 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:20.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.777 --rc genhtml_branch_coverage=1 00:39:20.777 --rc genhtml_function_coverage=1 00:39:20.777 --rc genhtml_legend=1 00:39:20.777 --rc geninfo_all_blocks=1 00:39:20.777 --rc geninfo_unexecuted_blocks=1 00:39:20.777 00:39:20.777 ' 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:20.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.777 --rc genhtml_branch_coverage=1 00:39:20.777 --rc genhtml_function_coverage=1 00:39:20.777 --rc genhtml_legend=1 00:39:20.777 --rc geninfo_all_blocks=1 00:39:20.777 --rc geninfo_unexecuted_blocks=1 00:39:20.777 00:39:20.777 ' 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:20.777 00:12:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:20.777 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:39:20.778 00:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:22.678 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:22.679 00:12:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:22.679 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:22.679 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:22.679 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:22.679 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:22.679 
00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:22.679 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:22.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:22.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:39:22.679 00:39:22.679 --- 10.0.0.2 ping statistics --- 00:39:22.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:22.679 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:22.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:22.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:39:22.680 00:39:22.680 --- 10.0.0.1 ping statistics --- 00:39:22.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:22.680 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3662222 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3662222 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3662222 ']' 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:22.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:22.680 00:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:22.938 [2024-11-10 00:12:48.962735] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
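The nvmftestinit sequence traced above builds a TCP loopback between the two E810 ports by moving one of them into a private network namespace. A condensed sketch of that setup, using the interface names (cvl_0_0, cvl_0_1), the namespace name (cvl_0_0_ns_spdk) and the 10.0.0.0/24 addresses reported in this run, is:

    # move the target-side port into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator-side port stays in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP (port 4420) in on the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity checks in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7), so the target listens on 10.0.0.2 while the initiator-side tools connect from the default namespace.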
00:39:22.938 [2024-11-10 00:12:48.965390] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:39:22.938 [2024-11-10 00:12:48.965498] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:22.938 [2024-11-10 00:12:49.122946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:23.196 [2024-11-10 00:12:49.263061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:23.196 [2024-11-10 00:12:49.263134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:23.196 [2024-11-10 00:12:49.263163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:23.196 [2024-11-10 00:12:49.263185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:23.196 [2024-11-10 00:12:49.263217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:23.196 [2024-11-10 00:12:49.265795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:23.196 [2024-11-10 00:12:49.265868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:23.196 [2024-11-10 00:12:49.265876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:23.455 [2024-11-10 00:12:49.629233] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:23.455 [2024-11-10 00:12:49.630319] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:23.455 [2024-11-10 00:12:49.631149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:23.455 [2024-11-10 00:12:49.631490] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
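Once the target is up, the nvmf_lvol test drives it entirely through rpc.py. A condensed sketch of the RPC sequence traced below, assuming the same rpc.py path and the default /var/tmp/spdk.sock RPC socket (the lvstore, lvol, snapshot and clone identifiers are whatever the create calls return, not the literal UUIDs from this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512          # Malloc0
    $rpc bdev_malloc_create 64 512          # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf runs randwrite against the namespace,
    # exercise the snapshot/resize/clone/inflate paths
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"

These are the same calls visible in the trace that follows; cleanup then reverses them with nvmf_delete_subsystem, bdev_lvol_delete and bdev_lvol_delete_lvstore before nvmftestfini tears the namespace down.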
00:39:23.712 00:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:23.713 00:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:39:23.713 00:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:23.713 00:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:23.713 00:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:23.970 00:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:23.970 00:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:24.229 [2024-11-10 00:12:50.186943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:24.229 00:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:24.487 00:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:24.487 00:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:24.744 00:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:24.745 00:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:25.311 00:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:25.569 00:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4d935fd5-d0c5-4c1b-9b94-a0a476577286 00:39:25.569 00:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4d935fd5-d0c5-4c1b-9b94-a0a476577286 lvol 20 00:39:25.828 00:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=52040967-604c-4236-8c3f-77be5fd2a9b7 00:39:25.828 00:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:26.086 00:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 52040967-604c-4236-8c3f-77be5fd2a9b7 00:39:26.350 00:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:26.607 [2024-11-10 00:12:52.563068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:39:26.607 00:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:26.866 00:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3662769 00:39:26.866 00:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:26.866 00:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:27.805 00:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 52040967-604c-4236-8c3f-77be5fd2a9b7 MY_SNAPSHOT 00:39:28.063 00:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b25c06b6-8940-4a8d-bb8d-eedc22f88ef9 00:39:28.063 00:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 52040967-604c-4236-8c3f-77be5fd2a9b7 30 00:39:28.321 00:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b25c06b6-8940-4a8d-bb8d-eedc22f88ef9 MY_CLONE 00:39:28.888 00:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8263f1a3-e2cb-4960-9b7e-8651036115d5 00:39:28.888 00:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8263f1a3-e2cb-4960-9b7e-8651036115d5 00:39:29.453 00:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3662769 00:39:37.560 Initializing NVMe Controllers 00:39:37.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:37.560 Controller IO queue size 128, less than required. 00:39:37.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:37.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:37.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:37.560 Initialization complete. Launching workers. 
00:39:37.560 ======================================================== 00:39:37.560 Latency(us) 00:39:37.560 Device Information : IOPS MiB/s Average min max 00:39:37.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8074.80 31.54 15862.74 631.63 146113.13 00:39:37.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8222.90 32.12 15576.97 4861.52 159217.61 00:39:37.560 ======================================================== 00:39:37.560 Total : 16297.70 63.66 15718.56 631.63 159217.61 00:39:37.560 00:39:37.560 00:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:37.560 00:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 52040967-604c-4236-8c3f-77be5fd2a9b7 00:39:37.818 00:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d935fd5-d0c5-4c1b-9b94-a0a476577286 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:38.076 rmmod nvme_tcp 00:39:38.076 rmmod nvme_fabrics 00:39:38.076 rmmod nvme_keyring 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3662222 ']' 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3662222 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3662222 ']' 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3662222 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3662222 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3662222' 00:39:38.076 killing process with pid 3662222 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3662222 00:39:38.076 00:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3662222 00:39:39.449 00:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:39.449 00:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:39.449 00:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:39.449 00:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:39.449 00:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:39:39.449 00:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:39.449 00:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:39:39.707 00:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:39.707 00:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:39.707 00:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:39.707 00:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:39.707 00:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:41.609 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:41.609 00:39:41.609 real 0m21.276s 00:39:41.609 user 0m57.871s 00:39:41.609 sys 0m7.949s 00:39:41.609 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:41.609 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:41.609 ************************************ 00:39:41.609 END TEST nvmf_lvol 00:39:41.609 ************************************ 00:39:41.609 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:41.609 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:41.609 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:41.609 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:41.609 ************************************ 00:39:41.609 START TEST nvmf_lvs_grow 00:39:41.609 
************************************ 00:39:41.609 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:41.609 * Looking for test storage... 00:39:41.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:41.609 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:41.609 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:39:41.609 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:41.868 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:41.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.869 --rc genhtml_branch_coverage=1 00:39:41.869 --rc genhtml_function_coverage=1 00:39:41.869 --rc genhtml_legend=1 00:39:41.869 --rc geninfo_all_blocks=1 00:39:41.869 --rc geninfo_unexecuted_blocks=1 00:39:41.869 00:39:41.869 ' 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:41.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.869 --rc genhtml_branch_coverage=1 00:39:41.869 --rc genhtml_function_coverage=1 00:39:41.869 --rc genhtml_legend=1 00:39:41.869 --rc geninfo_all_blocks=1 00:39:41.869 --rc geninfo_unexecuted_blocks=1 00:39:41.869 00:39:41.869 ' 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:41.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.869 --rc genhtml_branch_coverage=1 00:39:41.869 --rc genhtml_function_coverage=1 00:39:41.869 --rc genhtml_legend=1 00:39:41.869 --rc geninfo_all_blocks=1 00:39:41.869 --rc geninfo_unexecuted_blocks=1 00:39:41.869 00:39:41.869 ' 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:41.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.869 --rc genhtml_branch_coverage=1 00:39:41.869 --rc genhtml_function_coverage=1 00:39:41.869 --rc genhtml_legend=1 00:39:41.869 --rc geninfo_all_blocks=1 00:39:41.869 --rc geninfo_unexecuted_blocks=1 00:39:41.869 00:39:41.869 ' 00:39:41.869 00:13:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:41.869 00:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:43.771 00:13:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:43.771 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:43.771 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:43.771 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:43.771 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:43.772 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:43.772 00:13:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:43.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:43.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:39:43.772 00:39:43.772 --- 10.0.0.2 ping statistics --- 00:39:43.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:43.772 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:43.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:43.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:39:43.772 00:39:43.772 --- 10.0.0.1 ping statistics --- 00:39:43.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:43.772 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3666147 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3666147 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3666147 ']' 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:43.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:43.772 00:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:44.030 [2024-11-10 00:13:09.997633] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
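The nvmf_tcp_init sequence traced above (common.sh@250–291) builds the whole test topology: the target-side e810 port is moved into a private network namespace, both ports get addresses on 10.0.0.0/24, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms the link before nvmf_tgt is started inside the namespace. The sketch below condenses those exact commands into a standalone form; it is an illustrative recap, not part of the captured output, and the interface names and addresses are simply the ones this run happened to use.

```bash
#!/usr/bin/env bash
# Condensed recap of the nvmf_tcp_init steps in the trace above (hedged sketch,
# not part of the captured log). The target port lives in its own netns so the
# initiator in the root namespace reaches it across the link between the ports.
set -euo pipefail

TARGET_IF=cvl_0_0        # 0000:0a:00.0 in this run
INITIATOR_IF=cvl_0_1     # 0000:0a:00.1 in this run
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target IP

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP (port 4420) arriving on the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Connectivity checks, mirroring the two pings in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# The target then runs inside the namespace, as the log shows:
# ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
```

In this phy variant of the job the two ports are expected to be physically connected, so initiator traffic genuinely crosses the wire even though both ends live on one host.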
00:39:44.030 [2024-11-10 00:13:10.000188] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:39:44.030 [2024-11-10 00:13:10.000318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:44.030 [2024-11-10 00:13:10.151849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:44.288 [2024-11-10 00:13:10.284562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:44.288 [2024-11-10 00:13:10.284644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:44.288 [2024-11-10 00:13:10.284674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:44.288 [2024-11-10 00:13:10.284696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:44.288 [2024-11-10 00:13:10.284718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:44.288 [2024-11-10 00:13:10.286301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:44.578 [2024-11-10 00:13:10.653196] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:44.578 [2024-11-10 00:13:10.653617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:44.857 00:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:44.857 00:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:39:44.857 00:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:44.857 00:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:44.857 00:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:44.857 00:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:44.857 00:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:45.115 [2024-11-10 00:13:11.251363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:45.115 ************************************ 00:39:45.115 START TEST lvs_grow_clean 00:39:45.115 ************************************ 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:45.115 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:45.372 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:45.372 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:45.936 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6ca60c3b-57a0-425b-b041-9bf79b58324e 00:39:45.936 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ca60c3b-57a0-425b-b041-9bf79b58324e 00:39:45.936 00:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:45.936 00:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:45.936 00:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:46.194 00:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6ca60c3b-57a0-425b-b041-9bf79b58324e lvol 150 00:39:46.451 00:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=73102e66-1cbc-4e45-b291-8f286f0ceb2d 00:39:46.451 00:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:46.451 00:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:46.714 [2024-11-10 00:13:12.679196] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:46.714 [2024-11-10 00:13:12.679373] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:46.714 true 00:39:46.715 00:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ca60c3b-57a0-425b-b041-9bf79b58324e 00:39:46.715 00:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:46.976 00:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:46.976 00:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:47.232 00:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 73102e66-1cbc-4e45-b291-8f286f0ceb2d 00:39:47.490 00:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:47.748 [2024-11-10 00:13:13.791694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:47.748 00:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:48.007 00:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3666615 00:39:48.007 00:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:48.007 00:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:48.007 00:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3666615 /var/tmp/bdevperf.sock 00:39:48.007 00:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3666615 ']' 00:39:48.007 00:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:39:48.007 00:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:48.007 00:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:48.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:48.007 00:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:48.007 00:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:48.007 [2024-11-10 00:13:14.167869] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:39:48.007 [2024-11-10 00:13:14.168024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666615 ] 00:39:48.265 [2024-11-10 00:13:14.310880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.265 [2024-11-10 00:13:14.445819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:49.199 00:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:49.200 00:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:39:49.200 00:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:49.771 Nvme0n1 00:39:49.771 00:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:50.030 [ 00:39:50.030 { 00:39:50.030 "name": "Nvme0n1", 00:39:50.030 "aliases": [ 00:39:50.030 "73102e66-1cbc-4e45-b291-8f286f0ceb2d" 00:39:50.030 ], 00:39:50.030 "product_name": "NVMe disk", 00:39:50.030 "block_size": 4096, 00:39:50.030 "num_blocks": 38912, 00:39:50.030 "uuid": "73102e66-1cbc-4e45-b291-8f286f0ceb2d", 00:39:50.030 "numa_id": 0, 00:39:50.030 "assigned_rate_limits": { 00:39:50.030 "rw_ios_per_sec": 0, 00:39:50.030 "rw_mbytes_per_sec": 0, 00:39:50.030 "r_mbytes_per_sec": 0, 00:39:50.030 "w_mbytes_per_sec": 0 00:39:50.030 }, 00:39:50.030 "claimed": false, 00:39:50.030 "zoned": false, 00:39:50.030 "supported_io_types": { 00:39:50.030 "read": true, 00:39:50.030 "write": true, 00:39:50.030 "unmap": true, 00:39:50.030 "flush": true, 00:39:50.030 "reset": true, 00:39:50.030 "nvme_admin": true, 00:39:50.030 "nvme_io": true, 00:39:50.030 "nvme_io_md": false, 00:39:50.030 "write_zeroes": true, 00:39:50.030 "zcopy": false, 00:39:50.030 "get_zone_info": false, 00:39:50.030 "zone_management": false, 00:39:50.030 "zone_append": false, 00:39:50.030 "compare": true, 00:39:50.030 "compare_and_write": true, 00:39:50.030 "abort": true, 00:39:50.030 "seek_hole": false, 00:39:50.030 "seek_data": false, 00:39:50.030 "copy": true, 
00:39:50.030 "nvme_iov_md": false 00:39:50.030 }, 00:39:50.030 "memory_domains": [ 00:39:50.030 { 00:39:50.030 "dma_device_id": "system", 00:39:50.030 "dma_device_type": 1 00:39:50.030 } 00:39:50.030 ], 00:39:50.030 "driver_specific": { 00:39:50.030 "nvme": [ 00:39:50.030 { 00:39:50.030 "trid": { 00:39:50.030 "trtype": "TCP", 00:39:50.030 "adrfam": "IPv4", 00:39:50.030 "traddr": "10.0.0.2", 00:39:50.030 "trsvcid": "4420", 00:39:50.030 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:50.030 }, 00:39:50.030 "ctrlr_data": { 00:39:50.030 "cntlid": 1, 00:39:50.030 "vendor_id": "0x8086", 00:39:50.030 "model_number": "SPDK bdev Controller", 00:39:50.030 "serial_number": "SPDK0", 00:39:50.030 "firmware_revision": "25.01", 00:39:50.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:50.030 "oacs": { 00:39:50.030 "security": 0, 00:39:50.030 "format": 0, 00:39:50.030 "firmware": 0, 00:39:50.030 "ns_manage": 0 00:39:50.030 }, 00:39:50.030 "multi_ctrlr": true, 00:39:50.030 "ana_reporting": false 00:39:50.030 }, 00:39:50.030 "vs": { 00:39:50.030 "nvme_version": "1.3" 00:39:50.030 }, 00:39:50.030 "ns_data": { 00:39:50.030 "id": 1, 00:39:50.030 "can_share": true 00:39:50.030 } 00:39:50.030 } 00:39:50.030 ], 00:39:50.030 "mp_policy": "active_passive" 00:39:50.030 } 00:39:50.030 } 00:39:50.030 ] 00:39:50.030 00:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3666855 00:39:50.030 00:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:50.030 00:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:50.030 Running I/O for 10 seconds... 
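The lvs_grow_clean case whose I/O run starts here exercises growing a logical-volume store underneath a live NVMe/TCP namespace. Stripped of the xtrace noise, the RPC sequence visible in the trace reduces to the sketch below; the path is shortened and the UUID capture is illustrative, but every rpc.py call shown appears verbatim in the log.

```bash
# Hedged sketch of the lvs_grow_clean flow traced above. RPC is scripts/rpc.py
# from the SPDK tree; AIO_FILE stands in for test/nvmf/target/aio_bdev.
RPC=./scripts/rpc.py
AIO_FILE=/tmp/aio_bdev

# 1. 200 MiB file-backed AIO bdev hosting an lvstore with 4 MiB clusters,
#    plus a 150 MiB lvol on top of it.
rm -f "$AIO_FILE"; truncate -s 200M "$AIO_FILE"
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)

# 2. Grow the backing file to 400 MiB and let the AIO bdev pick it up; the
#    lvstore itself still reports 49 data clusters at this point.
truncate -s 400M "$AIO_FILE"
$RPC bdev_aio_rescan aio_bdev
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49

# 3. Export the lvol over NVMe/TCP; bdevperf attaches and runs randwrite.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# 4. While the 10 s randwrite run is in flight, grow the lvstore into the
#    new space and confirm the data-cluster count roughly doubles.
$RPC bdev_lvol_grow_lvstore -u "$lvs"
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99
```

The comparison values 49 and 99 (rather than 50 and 100) match what the trace checks, presumably because the lvstore keeps a cluster's worth of metadata out of the data-cluster count.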
00:39:50.963 Latency(us) 00:39:50.963 [2024-11-09T23:13:17.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:50.963 Nvme0n1 : 1.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:39:50.963 [2024-11-09T23:13:17.164Z] =================================================================================================================== 00:39:50.963 [2024-11-09T23:13:17.164Z] Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:39:50.963 00:39:51.900 00:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6ca60c3b-57a0-425b-b041-9bf79b58324e 00:39:52.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:52.157 Nvme0n1 : 2.00 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:39:52.157 [2024-11-09T23:13:18.358Z] =================================================================================================================== 00:39:52.157 [2024-11-09T23:13:18.358Z] Total : 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:39:52.157 00:39:52.157 true 00:39:52.157 00:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ca60c3b-57a0-425b-b041-9bf79b58324e 00:39:52.157 00:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:52.415 00:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:52.415 00:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:52.415 00:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3666855 00:39:52.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:52.980 Nvme0n1 : 3.00 10552.33 41.22 0.00 0.00 0.00 0.00 0.00 00:39:52.980 [2024-11-09T23:13:19.181Z] =================================================================================================================== 00:39:52.980 [2024-11-09T23:13:19.181Z] Total : 10552.33 41.22 0.00 0.00 0.00 0.00 0.00 00:39:52.980 00:39:53.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:53.923 Nvme0n1 : 4.00 10740.00 41.95 0.00 0.00 0.00 0.00 0.00 00:39:53.923 [2024-11-09T23:13:20.124Z] =================================================================================================================== 00:39:53.923 [2024-11-09T23:13:20.124Z] Total : 10740.00 41.95 0.00 0.00 0.00 0.00 0.00 00:39:53.923 00:39:55.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:55.295 Nvme0n1 : 5.00 10751.00 42.00 0.00 0.00 0.00 0.00 0.00 00:39:55.295 [2024-11-09T23:13:21.496Z] =================================================================================================================== 00:39:55.295 [2024-11-09T23:13:21.496Z] Total : 10751.00 42.00 0.00 0.00 0.00 0.00 0.00 00:39:55.295 00:39:56.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:56.226 Nvme0n1 : 6.00 10779.50 42.11 0.00 0.00 0.00 0.00 0.00 00:39:56.226 [2024-11-09T23:13:22.427Z] 
=================================================================================================================== 00:39:56.226 [2024-11-09T23:13:22.427Z] Total : 10779.50 42.11 0.00 0.00 0.00 0.00 0.00 00:39:56.226 00:39:57.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:57.159 Nvme0n1 : 7.00 10790.86 42.15 0.00 0.00 0.00 0.00 0.00 00:39:57.159 [2024-11-09T23:13:23.360Z] =================================================================================================================== 00:39:57.159 [2024-11-09T23:13:23.360Z] Total : 10790.86 42.15 0.00 0.00 0.00 0.00 0.00 00:39:57.159 00:39:58.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:58.094 Nvme0n1 : 8.00 10815.12 42.25 0.00 0.00 0.00 0.00 0.00 00:39:58.094 [2024-11-09T23:13:24.295Z] =================================================================================================================== 00:39:58.094 [2024-11-09T23:13:24.295Z] Total : 10815.12 42.25 0.00 0.00 0.00 0.00 0.00 00:39:58.094 00:39:59.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:59.028 Nvme0n1 : 9.00 10827.00 42.29 0.00 0.00 0.00 0.00 0.00 00:39:59.028 [2024-11-09T23:13:25.229Z] =================================================================================================================== 00:39:59.028 [2024-11-09T23:13:25.229Z] Total : 10827.00 42.29 0.00 0.00 0.00 0.00 0.00 00:39:59.028 00:39:59.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:59.962 Nvme0n1 : 10.00 10836.50 42.33 0.00 0.00 0.00 0.00 0.00 00:39:59.962 [2024-11-09T23:13:26.163Z] =================================================================================================================== 00:39:59.962 [2024-11-09T23:13:26.163Z] Total : 10836.50 42.33 0.00 0.00 0.00 0.00 0.00 00:39:59.962 00:39:59.962 00:39:59.962 Latency(us) 00:39:59.962 [2024-11-09T23:13:26.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:59.962 Nvme0n1 : 10.01 10842.33 42.35 0.00 0.00 11798.66 5898.24 27185.30 00:39:59.962 [2024-11-09T23:13:26.163Z] =================================================================================================================== 00:39:59.962 [2024-11-09T23:13:26.163Z] Total : 10842.33 42.35 0.00 0.00 11798.66 5898.24 27185.30 00:39:59.962 { 00:39:59.962 "results": [ 00:39:59.962 { 00:39:59.962 "job": "Nvme0n1", 00:39:59.962 "core_mask": "0x2", 00:39:59.962 "workload": "randwrite", 00:39:59.962 "status": "finished", 00:39:59.962 "queue_depth": 128, 00:39:59.962 "io_size": 4096, 00:39:59.962 "runtime": 10.006429, 00:39:59.962 "iops": 10842.32946638606, 00:39:59.962 "mibps": 42.352849478070546, 00:39:59.962 "io_failed": 0, 00:39:59.962 "io_timeout": 0, 00:39:59.962 "avg_latency_us": 11798.657626943674, 00:39:59.962 "min_latency_us": 5898.24, 00:39:59.962 "max_latency_us": 27185.303703703703 00:39:59.962 } 00:39:59.962 ], 00:39:59.962 "core_count": 1 00:39:59.962 } 00:39:59.962 00:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3666615 00:39:59.962 00:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3666615 ']' 00:39:59.962 00:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3666615 00:39:59.962 
00:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:39:59.962 00:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:59.962 00:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3666615 00:40:00.221 00:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:40:00.221 00:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:40:00.221 00:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3666615' 00:40:00.221 killing process with pid 3666615 00:40:00.221 00:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3666615 00:40:00.221 Received shutdown signal, test time was about 10.000000 seconds 00:40:00.221 00:40:00.221 Latency(us) 00:40:00.221 [2024-11-09T23:13:26.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:00.221 [2024-11-09T23:13:26.422Z] =================================================================================================================== 00:40:00.221 [2024-11-09T23:13:26.422Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:00.221 00:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3666615 00:40:01.155 00:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:01.155 00:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:01.412 00:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ca60c3b-57a0-425b-b041-9bf79b58324e 00:40:01.412 00:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:01.979 00:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:01.979 00:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:01.979 00:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:01.979 [2024-11-10 00:13:28.139350] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:01.979 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ca60c3b-57a0-425b-b041-9bf79b58324e 00:40:01.980 
00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:40:01.980 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ca60c3b-57a0-425b-b041-9bf79b58324e 00:40:01.980 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:01.980 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:01.980 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:01.980 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:01.980 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:01.980 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:01.980 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:01.980 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:01.980 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ca60c3b-57a0-425b-b041-9bf79b58324e 00:40:02.242 request: 00:40:02.242 { 00:40:02.242 "uuid": "6ca60c3b-57a0-425b-b041-9bf79b58324e", 00:40:02.242 "method": "bdev_lvol_get_lvstores", 00:40:02.242 "req_id": 1 00:40:02.242 } 00:40:02.242 Got JSON-RPC error response 00:40:02.242 response: 00:40:02.242 { 00:40:02.242 "code": -19, 00:40:02.242 "message": "No such device" 00:40:02.242 } 00:40:02.501 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:40:02.501 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:02.501 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:02.501 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:02.501 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:02.759 aio_bdev 00:40:02.759 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
73102e66-1cbc-4e45-b291-8f286f0ceb2d 00:40:02.759 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=73102e66-1cbc-4e45-b291-8f286f0ceb2d 00:40:02.759 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:40:02.759 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:40:02.759 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:40:02.759 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:40:02.759 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:03.017 00:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 73102e66-1cbc-4e45-b291-8f286f0ceb2d -t 2000 00:40:03.275 [ 00:40:03.275 { 00:40:03.275 "name": "73102e66-1cbc-4e45-b291-8f286f0ceb2d", 00:40:03.275 "aliases": [ 00:40:03.275 "lvs/lvol" 00:40:03.275 ], 00:40:03.275 "product_name": "Logical Volume", 00:40:03.275 "block_size": 4096, 00:40:03.275 "num_blocks": 38912, 00:40:03.275 "uuid": "73102e66-1cbc-4e45-b291-8f286f0ceb2d", 00:40:03.275 "assigned_rate_limits": { 00:40:03.275 "rw_ios_per_sec": 0, 00:40:03.275 "rw_mbytes_per_sec": 0, 00:40:03.275 "r_mbytes_per_sec": 0, 00:40:03.275 "w_mbytes_per_sec": 0 00:40:03.275 }, 00:40:03.275 "claimed": false, 00:40:03.275 "zoned": false, 00:40:03.275 "supported_io_types": { 00:40:03.275 "read": true, 00:40:03.275 "write": true, 00:40:03.275 "unmap": true, 00:40:03.275 "flush": false, 00:40:03.275 "reset": true, 00:40:03.275 "nvme_admin": false, 00:40:03.275 "nvme_io": false, 00:40:03.275 "nvme_io_md": false, 00:40:03.275 "write_zeroes": true, 00:40:03.275 "zcopy": false, 00:40:03.275 "get_zone_info": false, 00:40:03.275 "zone_management": false, 00:40:03.275 "zone_append": false, 00:40:03.275 "compare": false, 00:40:03.275 "compare_and_write": false, 00:40:03.275 "abort": false, 00:40:03.275 "seek_hole": true, 00:40:03.275 "seek_data": true, 00:40:03.275 "copy": false, 00:40:03.275 "nvme_iov_md": false 00:40:03.275 }, 00:40:03.275 "driver_specific": { 00:40:03.275 "lvol": { 00:40:03.275 "lvol_store_uuid": "6ca60c3b-57a0-425b-b041-9bf79b58324e", 00:40:03.275 "base_bdev": "aio_bdev", 00:40:03.275 "thin_provision": false, 00:40:03.275 "num_allocated_clusters": 38, 00:40:03.275 "snapshot": false, 00:40:03.275 "clone": false, 00:40:03.275 "esnap_clone": false 00:40:03.275 } 00:40:03.275 } 00:40:03.275 } 00:40:03.275 ] 00:40:03.275 00:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:40:03.275 00:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ca60c3b-57a0-425b-b041-9bf79b58324e 00:40:03.275 00:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:03.533 00:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:03.533 00:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ca60c3b-57a0-425b-b041-9bf79b58324e 00:40:03.533 00:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:03.791 00:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:03.791 00:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 73102e66-1cbc-4e45-b291-8f286f0ceb2d 00:40:04.050 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6ca60c3b-57a0-425b-b041-9bf79b58324e 00:40:04.307 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:04.565 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:04.565 00:40:04.565 real 0m19.436s 00:40:04.565 user 0m19.220s 00:40:04.565 sys 0m1.880s 00:40:04.565 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:04.565 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:04.565 ************************************ 00:40:04.565 END TEST lvs_grow_clean 00:40:04.565 ************************************ 00:40:04.565 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:04.565 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:40:04.565 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:04.565 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:04.822 ************************************ 00:40:04.822 START TEST lvs_grow_dirty 00:40:04.822 ************************************ 00:40:04.822 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:40:04.822 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:04.822 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:04.822 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:04.822 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:04.822 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:04.822 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:04.822 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:04.822 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:04.822 00:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:05.080 00:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:05.080 00:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:05.339 00:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:05.339 00:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:05.339 00:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:05.596 00:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:05.597 00:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:05.597 00:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u af3c5408-5db3-4b17-96ef-75a84ebaeeca lvol 150 00:40:05.854 00:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=98cce6ce-d4a9-428a-8844-8674ab31ab71 00:40:05.854 00:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:05.854 00:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:06.112 [2024-11-10 00:13:32.175160] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:06.112 [2024-11-10 00:13:32.175298] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:06.112 true 00:40:06.112 00:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:06.112 00:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:06.371 00:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:06.371 00:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:06.628 00:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 98cce6ce-d4a9-428a-8844-8674ab31ab71 00:40:06.886 00:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:07.144 [2024-11-10 00:13:33.267682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:07.144 00:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:07.402 00:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3668886 00:40:07.402 00:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:07.402 00:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:07.402 00:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3668886 /var/tmp/bdevperf.sock 00:40:07.402 00:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3668886 ']' 00:40:07.402 00:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:07.402 00:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:07.402 00:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:07.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:40:07.402 00:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:07.402 00:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:07.659 [2024-11-10 00:13:33.641373] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:40:07.659 [2024-11-10 00:13:33.641507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668886 ] 00:40:07.659 [2024-11-10 00:13:33.779826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:07.917 [2024-11-10 00:13:33.903797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:08.482 00:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:08.482 00:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:40:08.482 00:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:09.047 Nvme0n1 00:40:09.047 00:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:09.305 [ 00:40:09.306 { 00:40:09.306 "name": "Nvme0n1", 00:40:09.306 "aliases": [ 00:40:09.306 "98cce6ce-d4a9-428a-8844-8674ab31ab71" 00:40:09.306 ], 00:40:09.306 "product_name": "NVMe disk", 00:40:09.306 "block_size": 4096, 00:40:09.306 "num_blocks": 38912, 00:40:09.306 "uuid": "98cce6ce-d4a9-428a-8844-8674ab31ab71", 00:40:09.306 "numa_id": 0, 00:40:09.306 "assigned_rate_limits": { 00:40:09.306 "rw_ios_per_sec": 0, 00:40:09.306 "rw_mbytes_per_sec": 0, 00:40:09.306 "r_mbytes_per_sec": 0, 00:40:09.306 "w_mbytes_per_sec": 0 00:40:09.306 }, 00:40:09.306 "claimed": false, 00:40:09.306 "zoned": false, 00:40:09.306 "supported_io_types": { 00:40:09.306 "read": true, 00:40:09.306 "write": true, 00:40:09.306 "unmap": true, 00:40:09.306 "flush": true, 00:40:09.306 "reset": true, 00:40:09.306 "nvme_admin": true, 00:40:09.306 "nvme_io": true, 00:40:09.306 "nvme_io_md": false, 00:40:09.306 "write_zeroes": true, 00:40:09.306 "zcopy": false, 00:40:09.306 "get_zone_info": false, 00:40:09.306 "zone_management": false, 00:40:09.306 "zone_append": false, 00:40:09.306 "compare": true, 00:40:09.306 "compare_and_write": true, 00:40:09.306 "abort": true, 00:40:09.306 "seek_hole": false, 00:40:09.306 "seek_data": false, 00:40:09.306 "copy": true, 00:40:09.306 "nvme_iov_md": false 00:40:09.306 }, 00:40:09.306 "memory_domains": [ 00:40:09.306 { 00:40:09.306 "dma_device_id": "system", 00:40:09.306 "dma_device_type": 1 00:40:09.306 } 00:40:09.306 ], 00:40:09.306 "driver_specific": { 00:40:09.306 "nvme": [ 00:40:09.306 { 00:40:09.306 "trid": { 00:40:09.306 "trtype": "TCP", 00:40:09.306 "adrfam": "IPv4", 00:40:09.306 "traddr": "10.0.0.2", 00:40:09.306 "trsvcid": "4420", 00:40:09.306 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:09.306 }, 00:40:09.306 "ctrlr_data": 
{ 00:40:09.306 "cntlid": 1, 00:40:09.306 "vendor_id": "0x8086", 00:40:09.306 "model_number": "SPDK bdev Controller", 00:40:09.306 "serial_number": "SPDK0", 00:40:09.306 "firmware_revision": "25.01", 00:40:09.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:09.306 "oacs": { 00:40:09.306 "security": 0, 00:40:09.306 "format": 0, 00:40:09.306 "firmware": 0, 00:40:09.306 "ns_manage": 0 00:40:09.306 }, 00:40:09.306 "multi_ctrlr": true, 00:40:09.306 "ana_reporting": false 00:40:09.306 }, 00:40:09.306 "vs": { 00:40:09.306 "nvme_version": "1.3" 00:40:09.306 }, 00:40:09.306 "ns_data": { 00:40:09.306 "id": 1, 00:40:09.306 "can_share": true 00:40:09.306 } 00:40:09.306 } 00:40:09.306 ], 00:40:09.306 "mp_policy": "active_passive" 00:40:09.306 } 00:40:09.306 } 00:40:09.306 ] 00:40:09.306 00:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3669143 00:40:09.306 00:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:09.306 00:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:09.306 Running I/O for 10 seconds... 00:40:10.678 Latency(us) 00:40:10.678 [2024-11-09T23:13:36.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:10.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:10.679 Nvme0n1 : 1.00 10448.00 40.81 0.00 0.00 0.00 0.00 0.00 00:40:10.679 [2024-11-09T23:13:36.880Z] =================================================================================================================== 00:40:10.679 [2024-11-09T23:13:36.880Z] Total : 10448.00 40.81 0.00 0.00 0.00 0.00 0.00 00:40:10.679 00:40:11.244 00:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:11.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:11.502 Nvme0n1 : 2.00 10558.00 41.24 0.00 0.00 0.00 0.00 0.00 00:40:11.502 [2024-11-09T23:13:37.703Z] =================================================================================================================== 00:40:11.502 [2024-11-09T23:13:37.703Z] Total : 10558.00 41.24 0.00 0.00 0.00 0.00 0.00 00:40:11.502 00:40:11.502 true 00:40:11.502 00:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:11.502 00:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:11.759 00:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:11.759 00:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:11.759 00:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3669143 00:40:12.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:12.324 Nvme0n1 : 
3.00 10679.33 41.72 0.00 0.00 0.00 0.00 0.00 00:40:12.324 [2024-11-09T23:13:38.525Z] =================================================================================================================== 00:40:12.324 [2024-11-09T23:13:38.525Z] Total : 10679.33 41.72 0.00 0.00 0.00 0.00 0.00 00:40:12.324 00:40:13.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:13.257 Nvme0n1 : 4.00 10867.00 42.45 0.00 0.00 0.00 0.00 0.00 00:40:13.257 [2024-11-09T23:13:39.458Z] =================================================================================================================== 00:40:13.257 [2024-11-09T23:13:39.458Z] Total : 10867.00 42.45 0.00 0.00 0.00 0.00 0.00 00:40:13.257 00:40:14.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:14.686 Nvme0n1 : 5.00 10852.60 42.39 0.00 0.00 0.00 0.00 0.00 00:40:14.686 [2024-11-09T23:13:40.887Z] =================================================================================================================== 00:40:14.686 [2024-11-09T23:13:40.887Z] Total : 10852.60 42.39 0.00 0.00 0.00 0.00 0.00 00:40:14.686 00:40:15.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:15.620 Nvme0n1 : 6.00 10927.67 42.69 0.00 0.00 0.00 0.00 0.00 00:40:15.620 [2024-11-09T23:13:41.821Z] =================================================================================================================== 00:40:15.620 [2024-11-09T23:13:41.821Z] Total : 10927.67 42.69 0.00 0.00 0.00 0.00 0.00 00:40:15.620 00:40:16.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:16.554 Nvme0n1 : 7.00 10949.86 42.77 0.00 0.00 0.00 0.00 0.00 00:40:16.554 [2024-11-09T23:13:42.755Z] =================================================================================================================== 00:40:16.554 [2024-11-09T23:13:42.755Z] Total : 10949.86 42.77 0.00 0.00 0.00 0.00 0.00 00:40:16.554 00:40:17.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:17.487 Nvme0n1 : 8.00 10946.38 42.76 0.00 0.00 0.00 0.00 0.00 00:40:17.487 [2024-11-09T23:13:43.688Z] =================================================================================================================== 00:40:17.487 [2024-11-09T23:13:43.688Z] Total : 10946.38 42.76 0.00 0.00 0.00 0.00 0.00 00:40:17.487 00:40:18.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:18.420 Nvme0n1 : 9.00 10943.67 42.75 0.00 0.00 0.00 0.00 0.00 00:40:18.420 [2024-11-09T23:13:44.621Z] =================================================================================================================== 00:40:18.420 [2024-11-09T23:13:44.621Z] Total : 10943.67 42.75 0.00 0.00 0.00 0.00 0.00 00:40:18.420 00:40:19.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:19.353 Nvme0n1 : 10.00 10941.50 42.74 0.00 0.00 0.00 0.00 0.00 00:40:19.353 [2024-11-09T23:13:45.554Z] =================================================================================================================== 00:40:19.353 [2024-11-09T23:13:45.554Z] Total : 10941.50 42.74 0.00 0.00 0.00 0.00 0.00 00:40:19.353 00:40:19.353 00:40:19.353 Latency(us) 00:40:19.353 [2024-11-09T23:13:45.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:19.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:19.353 Nvme0n1 : 10.01 10943.22 42.75 0.00 0.00 11690.00 5582.70 26408.58 00:40:19.353 
[2024-11-09T23:13:45.554Z] =================================================================================================================== 00:40:19.353 [2024-11-09T23:13:45.554Z] Total : 10943.22 42.75 0.00 0.00 11690.00 5582.70 26408.58 00:40:19.353 { 00:40:19.353 "results": [ 00:40:19.353 { 00:40:19.353 "job": "Nvme0n1", 00:40:19.353 "core_mask": "0x2", 00:40:19.353 "workload": "randwrite", 00:40:19.353 "status": "finished", 00:40:19.353 "queue_depth": 128, 00:40:19.353 "io_size": 4096, 00:40:19.353 "runtime": 10.010129, 00:40:19.353 "iops": 10943.21561690164, 00:40:19.353 "mibps": 42.74693600352203, 00:40:19.353 "io_failed": 0, 00:40:19.353 "io_timeout": 0, 00:40:19.353 "avg_latency_us": 11690.002814954114, 00:40:19.353 "min_latency_us": 5582.696296296296, 00:40:19.353 "max_latency_us": 26408.58074074074 00:40:19.353 } 00:40:19.353 ], 00:40:19.353 "core_count": 1 00:40:19.353 } 00:40:19.353 00:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3668886 00:40:19.353 00:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3668886 ']' 00:40:19.353 00:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3668886 00:40:19.353 00:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:40:19.353 00:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:19.353 00:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3668886 00:40:19.353 00:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:40:19.353 00:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:40:19.353 00:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3668886' 00:40:19.353 killing process with pid 3668886 00:40:19.353 00:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3668886 00:40:19.353 Received shutdown signal, test time was about 10.000000 seconds 00:40:19.353 00:40:19.353 Latency(us) 00:40:19.353 [2024-11-09T23:13:45.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:19.353 [2024-11-09T23:13:45.554Z] =================================================================================================================== 00:40:19.353 [2024-11-09T23:13:45.554Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:19.353 00:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3668886 00:40:20.286 00:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:20.543 00:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:40:20.801 00:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:20.801 00:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:21.058 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:21.058 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:21.058 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3666147 00:40:21.058 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3666147 00:40:21.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3666147 Killed "${NVMF_APP[@]}" "$@" 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3670477 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3670477 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3670477 ']' 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:21.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:21.317 00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:21.317 [2024-11-10 00:13:47.394545] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:21.317 [2024-11-10 00:13:47.397175] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:40:21.317 [2024-11-10 00:13:47.397267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:21.576 [2024-11-10 00:13:47.548514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:21.576 [2024-11-10 00:13:47.682086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:21.576 [2024-11-10 00:13:47.682169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:21.576 [2024-11-10 00:13:47.682198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:21.576 [2024-11-10 00:13:47.682219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:21.576 [2024-11-10 00:13:47.682241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:21.576 [2024-11-10 00:13:47.683872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:22.142 [2024-11-10 00:13:48.038028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:22.142 [2024-11-10 00:13:48.038421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:22.400 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:22.400 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:40:22.400 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:22.400 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:22.400 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:22.400 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:22.400 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:22.658 [2024-11-10 00:13:48.635787] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:22.658 [2024-11-10 00:13:48.636010] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:22.658 [2024-11-10 00:13:48.636082] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:22.658 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:22.658 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 98cce6ce-d4a9-428a-8844-8674ab31ab71 00:40:22.658 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=98cce6ce-d4a9-428a-8844-8674ab31ab71 00:40:22.658 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:40:22.658 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:40:22.658 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:40:22.658 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:40:22.658 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:22.915 00:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 98cce6ce-d4a9-428a-8844-8674ab31ab71 -t 2000 00:40:23.173 [ 00:40:23.173 { 00:40:23.173 "name": "98cce6ce-d4a9-428a-8844-8674ab31ab71", 00:40:23.173 "aliases": [ 00:40:23.173 "lvs/lvol" 00:40:23.173 ], 00:40:23.173 "product_name": "Logical Volume", 00:40:23.173 "block_size": 4096, 00:40:23.173 "num_blocks": 38912, 00:40:23.173 "uuid": "98cce6ce-d4a9-428a-8844-8674ab31ab71", 00:40:23.173 "assigned_rate_limits": { 00:40:23.173 "rw_ios_per_sec": 0, 00:40:23.173 "rw_mbytes_per_sec": 0, 00:40:23.173 
"r_mbytes_per_sec": 0, 00:40:23.173 "w_mbytes_per_sec": 0 00:40:23.173 }, 00:40:23.173 "claimed": false, 00:40:23.173 "zoned": false, 00:40:23.173 "supported_io_types": { 00:40:23.173 "read": true, 00:40:23.173 "write": true, 00:40:23.173 "unmap": true, 00:40:23.173 "flush": false, 00:40:23.173 "reset": true, 00:40:23.173 "nvme_admin": false, 00:40:23.173 "nvme_io": false, 00:40:23.173 "nvme_io_md": false, 00:40:23.173 "write_zeroes": true, 00:40:23.173 "zcopy": false, 00:40:23.173 "get_zone_info": false, 00:40:23.173 "zone_management": false, 00:40:23.173 "zone_append": false, 00:40:23.173 "compare": false, 00:40:23.173 "compare_and_write": false, 00:40:23.173 "abort": false, 00:40:23.173 "seek_hole": true, 00:40:23.173 "seek_data": true, 00:40:23.173 "copy": false, 00:40:23.173 "nvme_iov_md": false 00:40:23.173 }, 00:40:23.173 "driver_specific": { 00:40:23.173 "lvol": { 00:40:23.173 "lvol_store_uuid": "af3c5408-5db3-4b17-96ef-75a84ebaeeca", 00:40:23.173 "base_bdev": "aio_bdev", 00:40:23.173 "thin_provision": false, 00:40:23.173 "num_allocated_clusters": 38, 00:40:23.173 "snapshot": false, 00:40:23.173 "clone": false, 00:40:23.173 "esnap_clone": false 00:40:23.173 } 00:40:23.173 } 00:40:23.173 } 00:40:23.173 ] 00:40:23.173 00:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:40:23.173 00:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:23.173 00:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:23.431 00:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:23.431 00:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:23.431 00:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:23.689 00:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:23.689 00:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:23.947 [2024-11-10 00:13:50.012942] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:23.947 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:23.947 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:40:23.947 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:23.947 00:13:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:23.947 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:23.947 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:23.947 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:23.947 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:23.947 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:23.947 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:23.947 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:23.947 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:24.206 request: 00:40:24.206 { 00:40:24.206 "uuid": "af3c5408-5db3-4b17-96ef-75a84ebaeeca", 00:40:24.206 "method": "bdev_lvol_get_lvstores", 00:40:24.206 "req_id": 1 00:40:24.206 } 00:40:24.206 Got JSON-RPC error response 00:40:24.206 response: 00:40:24.206 { 00:40:24.206 "code": -19, 00:40:24.206 "message": "No such device" 00:40:24.206 } 00:40:24.206 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:40:24.206 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:24.206 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:24.206 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:24.206 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:24.463 aio_bdev 00:40:24.463 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 98cce6ce-d4a9-428a-8844-8674ab31ab71 00:40:24.463 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=98cce6ce-d4a9-428a-8844-8674ab31ab71 00:40:24.463 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:40:24.463 00:13:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:40:24.463 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:40:24.463 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:40:24.463 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:25.028 00:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 98cce6ce-d4a9-428a-8844-8674ab31ab71 -t 2000 00:40:25.028 [ 00:40:25.028 { 00:40:25.028 "name": "98cce6ce-d4a9-428a-8844-8674ab31ab71", 00:40:25.028 "aliases": [ 00:40:25.028 "lvs/lvol" 00:40:25.028 ], 00:40:25.028 "product_name": "Logical Volume", 00:40:25.028 "block_size": 4096, 00:40:25.028 "num_blocks": 38912, 00:40:25.028 "uuid": "98cce6ce-d4a9-428a-8844-8674ab31ab71", 00:40:25.028 "assigned_rate_limits": { 00:40:25.028 "rw_ios_per_sec": 0, 00:40:25.028 "rw_mbytes_per_sec": 0, 00:40:25.028 "r_mbytes_per_sec": 0, 00:40:25.028 "w_mbytes_per_sec": 0 00:40:25.028 }, 00:40:25.028 "claimed": false, 00:40:25.028 "zoned": false, 00:40:25.028 "supported_io_types": { 00:40:25.028 "read": true, 00:40:25.028 "write": true, 00:40:25.028 "unmap": true, 00:40:25.028 "flush": false, 00:40:25.028 "reset": true, 00:40:25.028 "nvme_admin": false, 00:40:25.028 "nvme_io": false, 00:40:25.028 "nvme_io_md": false, 00:40:25.028 "write_zeroes": true, 00:40:25.028 "zcopy": false, 00:40:25.028 "get_zone_info": false, 00:40:25.028 "zone_management": false, 00:40:25.028 "zone_append": false, 00:40:25.028 "compare": false, 00:40:25.028 "compare_and_write": false, 00:40:25.028 "abort": false, 00:40:25.028 "seek_hole": true, 00:40:25.028 "seek_data": true, 00:40:25.028 "copy": false, 00:40:25.028 "nvme_iov_md": false 00:40:25.028 }, 00:40:25.028 "driver_specific": { 00:40:25.028 "lvol": { 00:40:25.028 "lvol_store_uuid": "af3c5408-5db3-4b17-96ef-75a84ebaeeca", 00:40:25.028 "base_bdev": "aio_bdev", 00:40:25.028 "thin_provision": false, 00:40:25.028 "num_allocated_clusters": 38, 00:40:25.028 "snapshot": false, 00:40:25.028 "clone": false, 00:40:25.028 "esnap_clone": false 00:40:25.028 } 00:40:25.028 } 00:40:25.028 } 00:40:25.028 ] 00:40:25.028 00:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:40:25.028 00:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:25.028 00:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:25.286 00:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:25.287 00:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:25.287 00:13:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:25.853 00:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:25.853 00:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 98cce6ce-d4a9-428a-8844-8674ab31ab71 00:40:25.853 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af3c5408-5db3-4b17-96ef-75a84ebaeeca 00:40:26.114 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:26.374 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:26.632 00:40:26.632 real 0m21.814s 00:40:26.632 user 0m39.100s 00:40:26.632 sys 0m4.740s 00:40:26.632 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:26.632 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:26.632 ************************************ 00:40:26.632 END TEST lvs_grow_dirty 00:40:26.632 ************************************ 00:40:26.632 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:26.632 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:40:26.632 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:40:26.632 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:40:26.632 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:26.632 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:40:26.632 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:40:26.632 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:26.633 nvmf_trace.0 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:26.633 rmmod nvme_tcp 00:40:26.633 rmmod nvme_fabrics 00:40:26.633 rmmod nvme_keyring 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3670477 ']' 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3670477 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3670477 ']' 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3670477 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3670477 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3670477' 00:40:26.633 killing process with pid 3670477 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3670477 00:40:26.633 00:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3670477 00:40:28.006 00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:28.006 00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:28.006 00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:28.006 00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:28.006 00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:40:28.006 00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:28.006 00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:40:28.006 00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:28.006 00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:28.006 00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:28.006 00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:28.006 00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.908 00:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:29.908 00:40:29.908 real 0m48.192s 00:40:29.908 user 1m1.453s 00:40:29.908 sys 0m8.670s 00:40:29.908 00:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:29.908 00:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:29.908 ************************************ 00:40:29.908 END TEST nvmf_lvs_grow 00:40:29.908 ************************************ 00:40:29.908 00:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:29.908 00:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:29.908 00:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:29.908 00:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:29.908 ************************************ 00:40:29.908 START TEST nvmf_bdev_io_wait 00:40:29.908 ************************************ 00:40:29.908 00:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:29.908 * Looking for test storage... 
00:40:29.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:29.908 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:29.908 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:29.909 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:30.167 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:30.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.168 --rc genhtml_branch_coverage=1 00:40:30.168 --rc genhtml_function_coverage=1 00:40:30.168 --rc genhtml_legend=1 00:40:30.168 --rc geninfo_all_blocks=1 00:40:30.168 --rc geninfo_unexecuted_blocks=1 00:40:30.168 00:40:30.168 ' 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:30.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.168 --rc genhtml_branch_coverage=1 00:40:30.168 --rc genhtml_function_coverage=1 00:40:30.168 --rc genhtml_legend=1 00:40:30.168 --rc geninfo_all_blocks=1 00:40:30.168 --rc geninfo_unexecuted_blocks=1 00:40:30.168 00:40:30.168 ' 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:30.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.168 --rc genhtml_branch_coverage=1 00:40:30.168 --rc genhtml_function_coverage=1 00:40:30.168 --rc genhtml_legend=1 00:40:30.168 --rc geninfo_all_blocks=1 00:40:30.168 --rc geninfo_unexecuted_blocks=1 00:40:30.168 00:40:30.168 ' 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:30.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.168 --rc genhtml_branch_coverage=1 00:40:30.168 --rc genhtml_function_coverage=1 00:40:30.168 --rc genhtml_legend=1 00:40:30.168 --rc geninfo_all_blocks=1 00:40:30.168 --rc 
geninfo_unexecuted_blocks=1 00:40:30.168 00:40:30.168 ' 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:30.168 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:40:30.169 00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:32.069 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:32.070 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:32.070 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:32.070 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:32.070 
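The discovery loop above resolves each supported NIC from its PCI function to a kernel net device: it matches the Intel E810 IDs (vendor 0x8086, device 0x159b, bound to the ice driver) and then reads /sys/bus/pci/devices/<bdf>/net/ to learn the interface names (cvl_0_0 and cvl_0_1 here). A minimal standalone sketch of the same lookup, independent of the pci_bus_cache plumbing used by common.sh (illustrative only, not the script itself):

  # Enumerate E810 functions and the netdevs bound to them.
  for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for netdir in "/sys/bus/pci/devices/$bdf/net/"*; do
          [ -e "$netdir" ] || continue               # this function has no netdev bound
          echo "Found net device under $bdf: $(basename "$netdir")"
      done
  done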
00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:32.070 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:32.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:32.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:40:32.070 00:40:32.070 --- 10.0.0.2 ping statistics --- 00:40:32.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.070 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:32.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
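The block above builds the point-to-point rig used by the TCP phy tests: port cvl_0_0 is moved into a private namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the ACCEPT rule is tagged with an SPDK_NVMF comment so teardown can later strip exactly these rules with iptables-save | grep -v SPDK_NVMF | iptables-restore. Condensed, the same setup is roughly:

  ip netns add cvl_0_0_ns_spdk                      # namespace that will own the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Tag the firewall rule so teardown can filter it back out by comment.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                # end-to-end sanity check

Splitting the two ports across namespaces forces the NVMe/TCP traffic onto the physical link between them rather than letting the kernel short-circuit it over loopback.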
00:40:32.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:40:32.070 00:40:32.070 --- 10.0.0.1 ping statistics --- 00:40:32.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.070 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3673256 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3673256 00:40:32.070 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3673256 ']' 00:40:32.071 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:32.071 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:32.071 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:32.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
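nvmfappstart then launches the target inside that namespace with --wait-for-rpc and waits (waitforlisten, up to 100 retries) for the UNIX-domain RPC socket /var/tmp/spdk.sock to answer before configuring anything. Without the autotest helpers, the start-and-wait step looks roughly like this (default socket path assumed; this mirrors, not reproduces, waitforlisten):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # Poll the RPC socket until the target responds.
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.2
  done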
00:40:32.071 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:32.071 00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:32.329 [2024-11-10 00:13:58.303926] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:32.329 [2024-11-10 00:13:58.306415] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:40:32.329 [2024-11-10 00:13:58.306531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:32.329 [2024-11-10 00:13:58.463835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:32.588 [2024-11-10 00:13:58.605713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:32.588 [2024-11-10 00:13:58.605781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:32.588 [2024-11-10 00:13:58.605809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:32.588 [2024-11-10 00:13:58.605830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:32.588 [2024-11-10 00:13:58.605852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:32.588 [2024-11-10 00:13:58.608661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:32.588 [2024-11-10 00:13:58.608695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:32.588 [2024-11-10 00:13:58.608756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:32.588 [2024-11-10 00:13:58.608766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:32.588 [2024-11-10 00:13:58.609465] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.523 [2024-11-10 00:13:59.645158] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:33.523 [2024-11-10 00:13:59.646273] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:33.523 [2024-11-10 00:13:59.647513] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:33.523 [2024-11-10 00:13:59.648623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
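Because the target was started with --wait-for-rpc, framework initialization is still pending at this point, which is the only window in which bdev-layer options can be changed. The two RPCs above use that window: bdev_set_options -p 5 -c 1 presumably shrinks the bdev_io pool and per-thread cache to a handful of entries so that allocations run dry and the queued-IO-wait path this test targets actually gets exercised, and framework_start_init then runs the deferred init. As plain rpc.py calls (rpc_cmd is a thin wrapper over scripts/rpc.py):

  ./scripts/rpc.py bdev_set_options -p 5 -c 1    # tiny bdev_io pool/cache (assumed meaning of -p/-c)
  ./scripts/rpc.py framework_start_init          # perform the init deferred by --wait-for-rpc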
00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.523 [2024-11-10 00:13:59.653866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:33.523 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.781 Malloc0 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.781 [2024-11-10 00:13:59.778031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3673424 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:33.781 00:13:59 
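The remaining RPCs stand up the full export path: a TCP transport with an 8192-byte IO unit, a 64 MB Malloc bdev with 512-byte blocks (the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE constants from the top of the test), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. Outside the rpc_cmd wrapper these are:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420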
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3673426 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:33.781 { 00:40:33.781 "params": { 00:40:33.781 "name": "Nvme$subsystem", 00:40:33.781 "trtype": "$TEST_TRANSPORT", 00:40:33.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:33.781 "adrfam": "ipv4", 00:40:33.781 "trsvcid": "$NVMF_PORT", 00:40:33.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:33.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:33.781 "hdgst": ${hdgst:-false}, 00:40:33.781 "ddgst": ${ddgst:-false} 00:40:33.781 }, 00:40:33.781 "method": "bdev_nvme_attach_controller" 00:40:33.781 } 00:40:33.781 EOF 00:40:33.781 )") 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3673428 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:33.781 { 00:40:33.781 "params": { 00:40:33.781 "name": "Nvme$subsystem", 00:40:33.781 "trtype": "$TEST_TRANSPORT", 00:40:33.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:33.781 "adrfam": "ipv4", 00:40:33.781 "trsvcid": "$NVMF_PORT", 00:40:33.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:33.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:33.781 "hdgst": ${hdgst:-false}, 00:40:33.781 "ddgst": ${ddgst:-false} 00:40:33.781 }, 00:40:33.781 "method": "bdev_nvme_attach_controller" 00:40:33.781 } 00:40:33.781 EOF 00:40:33.781 )") 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:33.781 
00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3673431 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:33.781 { 00:40:33.781 "params": { 00:40:33.781 "name": "Nvme$subsystem", 00:40:33.781 "trtype": "$TEST_TRANSPORT", 00:40:33.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:33.781 "adrfam": "ipv4", 00:40:33.781 "trsvcid": "$NVMF_PORT", 00:40:33.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:33.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:33.781 "hdgst": ${hdgst:-false}, 00:40:33.781 "ddgst": ${ddgst:-false} 00:40:33.781 }, 00:40:33.781 "method": "bdev_nvme_attach_controller" 00:40:33.781 } 00:40:33.781 EOF 00:40:33.781 )") 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:33.781 { 00:40:33.781 "params": { 00:40:33.781 "name": "Nvme$subsystem", 00:40:33.781 "trtype": "$TEST_TRANSPORT", 00:40:33.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:33.781 "adrfam": "ipv4", 00:40:33.781 "trsvcid": "$NVMF_PORT", 00:40:33.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:33.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:33.781 "hdgst": ${hdgst:-false}, 00:40:33.781 "ddgst": ${ddgst:-false} 00:40:33.781 }, 00:40:33.781 "method": "bdev_nvme_attach_controller" 00:40:33.781 } 00:40:33.781 EOF 00:40:33.781 )") 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3673424 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
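Four bdevperf instances are launched concurrently, one per IO type (write, read, flush, unmap), each with its own core mask (-m) and shared-memory id (-i) so the DPDK processes do not collide, and each reading its controller configuration from a process substitution; the /dev/fd/63 in the command lines is simply the read end of <(gen_nvmf_target_json). A compact sketch of that fan-out:

  # One bdevperf per workload, each on its own core and shm id.
  pids=()
  i=1
  for entry in write:0x10 read:0x20 flush:0x40 unmap:0x80; do
      wl=${entry%%:*}; mask=${entry##*:}
      ./build/examples/bdevperf -m "$mask" -i "$i" --json <(gen_nvmf_target_json) \
          -q 128 -o 4096 -w "$wl" -t 1 -s 256 &
      pids+=("$!")
      i=$((i + 1))
  done
  wait "${pids[@]}"

The -q 128 -o 4096 -t 1 -s 256 flags ask each instance for a 128-deep queue of 4 KiB IOs for one second with a 256 MB memory reservation.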
00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:33.781 "params": { 00:40:33.781 "name": "Nvme1", 00:40:33.781 "trtype": "tcp", 00:40:33.781 "traddr": "10.0.0.2", 00:40:33.781 "adrfam": "ipv4", 00:40:33.781 "trsvcid": "4420", 00:40:33.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:33.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:33.781 "hdgst": false, 00:40:33.781 "ddgst": false 00:40:33.781 }, 00:40:33.781 "method": "bdev_nvme_attach_controller" 00:40:33.781 }' 00:40:33.781 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:33.782 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:33.782 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:33.782 "params": { 00:40:33.782 "name": "Nvme1", 00:40:33.782 "trtype": "tcp", 00:40:33.782 "traddr": "10.0.0.2", 00:40:33.782 "adrfam": "ipv4", 00:40:33.782 "trsvcid": "4420", 00:40:33.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:33.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:33.782 "hdgst": false, 00:40:33.782 "ddgst": false 00:40:33.782 }, 00:40:33.782 "method": "bdev_nvme_attach_controller" 00:40:33.782 }' 00:40:33.782 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:33.782 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:33.782 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:33.782 "params": { 00:40:33.782 "name": "Nvme1", 00:40:33.782 "trtype": "tcp", 00:40:33.782 "traddr": "10.0.0.2", 00:40:33.782 "adrfam": "ipv4", 00:40:33.782 "trsvcid": "4420", 00:40:33.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:33.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:33.782 "hdgst": false, 00:40:33.782 "ddgst": false 00:40:33.782 }, 00:40:33.782 "method": "bdev_nvme_attach_controller" 00:40:33.782 }' 00:40:33.782 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:33.782 00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:33.782 "params": { 00:40:33.782 "name": "Nvme1", 00:40:33.782 "trtype": "tcp", 00:40:33.782 "traddr": "10.0.0.2", 00:40:33.782 "adrfam": "ipv4", 00:40:33.782 "trsvcid": "4420", 00:40:33.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:33.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:33.782 "hdgst": false, 00:40:33.782 "ddgst": false 00:40:33.782 }, 00:40:33.782 "method": "bdev_nvme_attach_controller" 00:40:33.782 }' 00:40:33.782 [2024-11-10 00:13:59.869096] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:40:33.782 [2024-11-10 00:13:59.869096] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:40:33.782 [2024-11-10 00:13:59.869096] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:40:33.782 [2024-11-10 00:13:59.869103] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
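For reference, the parameters printed above are what each bdevperf reads through --json; wrapped in the usual SPDK JSON-config envelope the generated document would look roughly like this (shape assumed from the standard config format, values taken from the output above):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }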
00:40:33.782 [2024-11-10 00:13:59.869262] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:33.782 [2024-11-10 00:13:59.869268] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-10 00:13:59.869271] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-10 00:13:59.869268] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:33.782 --proc-type=auto ] 00:40:33.782 --proc-type=auto ] 00:40:34.039 [2024-11-10 00:14:00.131021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.039 [2024-11-10 00:14:00.237973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.297 [2024-11-10 00:14:00.253674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:34.297 [2024-11-10 00:14:00.343695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.297 [2024-11-10 00:14:00.358991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:34.297 [2024-11-10 00:14:00.414087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.297 [2024-11-10 00:14:00.464617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:34.556 [2024-11-10 00:14:00.530866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:40:34.556 Running I/O for 1 seconds... 00:40:34.813 Running I/O for 1 seconds... 00:40:34.813 Running I/O for 1 seconds... 00:40:35.071 Running I/O for 1 seconds... 
00:40:35.639 136096.00 IOPS, 531.62 MiB/s 00:40:35.639 Latency(us) 00:40:35.639 [2024-11-09T23:14:01.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:35.639 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:40:35.639 Nvme1n1 : 1.00 135785.99 530.41 0.00 0.00 937.72 436.91 2233.08 00:40:35.639 [2024-11-09T23:14:01.840Z] =================================================================================================================== 00:40:35.639 [2024-11-09T23:14:01.840Z] Total : 135785.99 530.41 0.00 0.00 937.72 436.91 2233.08 00:40:35.639 8145.00 IOPS, 31.82 MiB/s 00:40:35.639 Latency(us) 00:40:35.639 [2024-11-09T23:14:01.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:35.639 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:35.639 Nvme1n1 : 1.01 8190.58 31.99 0.00 0.00 15543.91 6068.15 20000.62 00:40:35.639 [2024-11-09T23:14:01.840Z] =================================================================================================================== 00:40:35.639 [2024-11-09T23:14:01.840Z] Total : 8190.58 31.99 0.00 0.00 15543.91 6068.15 20000.62 00:40:35.897 6551.00 IOPS, 25.59 MiB/s 00:40:35.897 Latency(us) 00:40:35.897 [2024-11-09T23:14:02.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:35.897 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:40:35.897 Nvme1n1 : 1.01 6622.03 25.87 0.00 0.00 19227.07 7767.23 27573.67 00:40:35.897 [2024-11-09T23:14:02.098Z] =================================================================================================================== 00:40:35.897 [2024-11-09T23:14:02.098Z] Total : 6622.03 25.87 0.00 0.00 19227.07 7767.23 27573.67 00:40:35.897 7540.00 IOPS, 29.45 MiB/s 00:40:35.897 Latency(us) 00:40:35.897 [2024-11-09T23:14:02.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:35.897 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:40:35.897 Nvme1n1 : 1.01 7615.36 29.75 0.00 0.00 16730.56 3616.62 26020.22 00:40:35.897 [2024-11-09T23:14:02.098Z] =================================================================================================================== 00:40:35.897 [2024-11-09T23:14:02.098Z] Total : 7615.36 29.75 0.00 0.00 16730.56 3616.62 26020.22 00:40:36.464 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3673426 00:40:36.464 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3673428 00:40:36.464 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3673431 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:36.722 rmmod nvme_tcp 00:40:36.722 rmmod nvme_fabrics 00:40:36.722 rmmod nvme_keyring 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3673256 ']' 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3673256 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3673256 ']' 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3673256 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3673256 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3673256' 00:40:36.722 killing process with pid 3673256 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3673256 00:40:36.722 00:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3673256 00:40:38.095 00:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:38.095 00:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:38.095 00:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:38.095 00:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:38.095 00:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
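Teardown stops the target through killprocess, which is deliberately careful: it verifies the pid is still alive, reads the process comm (reactor_0 here) so it never signals a recycled pid or the sudo wrapper, and only then kills and reaps it. A stripped-down analogue of that guard (not the helper itself):

  kill_spdk_pid() {
      local pid=$1 comm
      kill -0 "$pid" 2>/dev/null || return 0           # already gone
      comm=$(ps --no-headers -o comm= -p "$pid")
      [ "$comm" = "sudo" ] && return 1                 # never kill the sudo wrapper itself
      echo "killing process with pid $pid ($comm)"
      kill "$pid" && wait "$pid" 2>/dev/null
  }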
00:40:38.095 00:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:38.095 00:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:40:38.095 00:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:38.095 00:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:38.095 00:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.095 00:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:38.095 00:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:39.997 00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:39.997 00:40:39.997 real 0m9.938s 00:40:39.997 user 0m21.661s 00:40:39.997 sys 0m5.038s 00:40:39.997 00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:39.997 00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:39.997 ************************************ 00:40:39.997 END TEST nvmf_bdev_io_wait 00:40:39.997 ************************************ 00:40:39.997 00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:39.997 00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:39.997 00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:39.997 00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:39.997 ************************************ 00:40:39.997 START TEST nvmf_queue_depth 00:40:39.997 ************************************ 00:40:39.997 00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:39.997 * Looking for test storage... 
00:40:39.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:39.997 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:39.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.998 --rc genhtml_branch_coverage=1 00:40:39.998 --rc genhtml_function_coverage=1 00:40:39.998 --rc genhtml_legend=1 00:40:39.998 --rc geninfo_all_blocks=1 00:40:39.998 --rc geninfo_unexecuted_blocks=1 00:40:39.998 00:40:39.998 ' 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:39.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.998 --rc genhtml_branch_coverage=1 00:40:39.998 --rc genhtml_function_coverage=1 00:40:39.998 --rc genhtml_legend=1 00:40:39.998 --rc geninfo_all_blocks=1 00:40:39.998 --rc geninfo_unexecuted_blocks=1 00:40:39.998 00:40:39.998 ' 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:39.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.998 --rc genhtml_branch_coverage=1 00:40:39.998 --rc genhtml_function_coverage=1 00:40:39.998 --rc genhtml_legend=1 00:40:39.998 --rc geninfo_all_blocks=1 00:40:39.998 --rc geninfo_unexecuted_blocks=1 00:40:39.998 00:40:39.998 ' 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:39.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.998 --rc genhtml_branch_coverage=1 00:40:39.998 --rc genhtml_function_coverage=1 00:40:39.998 --rc genhtml_legend=1 00:40:39.998 --rc geninfo_all_blocks=1 00:40:39.998 --rc 
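The lt/cmp_versions machinery above is a pure-bash dotted-version comparison used to decide whether the installed lcov (1.15 here) predates 2.x: both strings are split on '.', '-' and ':', then compared field by field numerically, with missing fields treated as 0. Self-contained, the core of the idea is (simplified; assumes numeric components):

  version_lt() {                           # succeed when version $1 sorts before $2
      local -a a b
      local i n x y
      IFS='.-:' read -r -a a <<< "$1"
      IFS='.-:' read -r -a b <<< "$2"
      n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}
          (( 10#$x < 10#$y )) && return 0
          (( 10#$x > 10#$y )) && return 1
      done
      return 1                             # equal, so not less-than
  }
  version_lt 1.15 2 && echo "lcov predates 2.x"

Here 1.15 < 2 already on the first field, which is why the older --rc lcov_branch_coverage=1 option spelling is exported in LCOV_OPTS above.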
geninfo_unexecuted_blocks=1 00:40:39.998 00:40:39.998 ' 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
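nvmf/common.sh derives the host identity it will pass to nvme connect from nvme-cli itself: NVME_HOSTNQN comes from nvme gen-hostnqn and NVME_HOSTID is the UUID portion of that NQN. One way to reproduce the pair (the exact extraction in common.sh may differ):

  hostnqn=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  hostid=${hostnqn##*:}                # keep only the UUID after the last ':'
  nvme_host_args=(--hostnqn="$hostnqn" --hostid="$hostid")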
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.998 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:40:39.999 00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:42.529 00:14:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:42.529 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:42.529 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:40:42.529 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:42.529 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:42.529 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:42.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:42.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:40:42.530 00:40:42.530 --- 10.0.0.2 ping statistics --- 00:40:42.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:42.530 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:42.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:42.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:40:42.530 00:40:42.530 --- 10.0.0.1 ping statistics --- 00:40:42.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:42.530 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3675903 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3675903 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3675903 ']' 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:42.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:42.530 00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:42.530 [2024-11-10 00:14:08.373626] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:42.530 [2024-11-10 00:14:08.376239] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:40:42.530 [2024-11-10 00:14:08.376363] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:42.530 [2024-11-10 00:14:08.535951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.530 [2024-11-10 00:14:08.673181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:42.530 [2024-11-10 00:14:08.673269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:42.530 [2024-11-10 00:14:08.673298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:42.530 [2024-11-10 00:14:08.673319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:42.530 [2024-11-10 00:14:08.673341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:42.530 [2024-11-10 00:14:08.674979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:43.095 [2024-11-10 00:14:09.047543] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:43.095 [2024-11-10 00:14:09.048010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.354 [2024-11-10 00:14:09.400036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.354 Malloc0 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.354 [2024-11-10 00:14:09.532218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3676060 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3676060 /var/tmp/bdevperf.sock 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3676060 ']' 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:43.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:43.354 00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:43.611 [2024-11-10 00:14:09.625470] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:40:43.611 [2024-11-10 00:14:09.625633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3676060 ] 00:40:43.611 [2024-11-10 00:14:09.780055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.868 [2024-11-10 00:14:09.915367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:44.434 00:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:44.434 00:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:40:44.434 00:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:44.434 00:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.434 00:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:44.692 NVMe0n1 00:40:44.692 00:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.692 00:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:44.692 Running I/O for 10 seconds... 00:40:46.994 6085.00 IOPS, 23.77 MiB/s [2024-11-09T23:14:14.184Z] 6144.00 IOPS, 24.00 MiB/s [2024-11-09T23:14:15.122Z] 6144.00 IOPS, 24.00 MiB/s [2024-11-09T23:14:16.056Z] 6144.00 IOPS, 24.00 MiB/s [2024-11-09T23:14:16.988Z] 6130.60 IOPS, 23.95 MiB/s [2024-11-09T23:14:17.930Z] 6125.17 IOPS, 23.93 MiB/s [2024-11-09T23:14:18.868Z] 6143.86 IOPS, 24.00 MiB/s [2024-11-09T23:14:20.240Z] 6144.00 IOPS, 24.00 MiB/s [2024-11-09T23:14:21.172Z] 6147.22 IOPS, 24.01 MiB/s [2024-11-09T23:14:21.173Z] 6146.20 IOPS, 24.01 MiB/s 00:40:54.972 Latency(us) 00:40:54.972 [2024-11-09T23:14:21.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:54.972 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:54.972 Verification LBA range: start 0x0 length 0x4000 00:40:54.972 NVMe0n1 : 10.12 6175.17 24.12 0.00 0.00 164952.00 27379.48 105634.32 00:40:54.972 [2024-11-09T23:14:21.173Z] =================================================================================================================== 00:40:54.972 [2024-11-09T23:14:21.173Z] Total : 6175.17 24.12 0.00 0.00 164952.00 27379.48 105634.32 00:40:54.972 { 00:40:54.972 "results": [ 00:40:54.972 { 00:40:54.972 "job": "NVMe0n1", 00:40:54.972 "core_mask": "0x1", 00:40:54.972 "workload": "verify", 00:40:54.972 "status": "finished", 00:40:54.972 "verify_range": { 00:40:54.972 "start": 0, 00:40:54.972 "length": 16384 00:40:54.972 }, 00:40:54.972 "queue_depth": 1024, 00:40:54.972 "io_size": 4096, 00:40:54.972 "runtime": 10.118917, 00:40:54.972 "iops": 6175.166769329168, 00:40:54.972 "mibps": 24.121745192692064, 00:40:54.972 "io_failed": 0, 00:40:54.972 "io_timeout": 0, 00:40:54.972 "avg_latency_us": 164952.00062705603, 00:40:54.972 "min_latency_us": 27379.484444444446, 00:40:54.972 "max_latency_us": 105634.32296296296 00:40:54.972 } 
00:40:54.972 ], 00:40:54.972 "core_count": 1 00:40:54.972 } 00:40:54.972 00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3676060 00:40:54.972 00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3676060 ']' 00:40:54.972 00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3676060 00:40:54.972 00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:40:54.972 00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:54.972 00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3676060 00:40:54.972 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:54.972 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:54.972 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3676060' 00:40:54.972 killing process with pid 3676060 00:40:54.972 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3676060 00:40:54.972 Received shutdown signal, test time was about 10.000000 seconds 00:40:54.972 00:40:54.972 Latency(us) 00:40:54.972 [2024-11-09T23:14:21.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:54.972 [2024-11-09T23:14:21.173Z] =================================================================================================================== 00:40:54.972 [2024-11-09T23:14:21.173Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:54.972 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3676060 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:55.906 rmmod nvme_tcp 00:40:55.906 rmmod nvme_fabrics 00:40:55.906 rmmod nvme_keyring 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3675903 ']' 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3675903 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3675903 ']' 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3675903 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:55.906 00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3675903 00:40:55.906 00:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:40:55.906 00:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:40:55.906 00:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3675903' 00:40:55.906 killing process with pid 3675903 00:40:55.906 00:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3675903 00:40:55.906 00:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3675903 00:40:57.280 00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:57.280 00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:57.280 00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:57.280 00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:57.280 00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:40:57.280 00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:57.280 00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:40:57.280 00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:57.280 00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:57.280 00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:57.280 00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:57.280 00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:59.181 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:59.440 00:40:59.440 real 0m19.418s 00:40:59.440 user 0m26.862s 00:40:59.440 sys 0m3.745s 00:40:59.440 00:14:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:59.440 ************************************ 00:40:59.440 END TEST nvmf_queue_depth 00:40:59.440 ************************************ 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:59.440 ************************************ 00:40:59.440 START TEST nvmf_target_multipath 00:40:59.440 ************************************ 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:59.440 * Looking for test storage... 00:40:59.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:59.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:59.440 --rc genhtml_branch_coverage=1 00:40:59.440 --rc genhtml_function_coverage=1 00:40:59.440 --rc genhtml_legend=1 00:40:59.440 --rc geninfo_all_blocks=1 00:40:59.440 --rc geninfo_unexecuted_blocks=1 00:40:59.440 00:40:59.440 ' 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:59.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:59.440 --rc genhtml_branch_coverage=1 00:40:59.440 --rc genhtml_function_coverage=1 00:40:59.440 --rc genhtml_legend=1 00:40:59.440 --rc geninfo_all_blocks=1 00:40:59.440 --rc geninfo_unexecuted_blocks=1 00:40:59.440 00:40:59.440 ' 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:59.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:59.440 --rc genhtml_branch_coverage=1 00:40:59.440 --rc genhtml_function_coverage=1 00:40:59.440 --rc genhtml_legend=1 
00:40:59.440 --rc geninfo_all_blocks=1 00:40:59.440 --rc geninfo_unexecuted_blocks=1 00:40:59.440 00:40:59.440 ' 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:59.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:59.440 --rc genhtml_branch_coverage=1 00:40:59.440 --rc genhtml_function_coverage=1 00:40:59.440 --rc genhtml_legend=1 00:40:59.440 --rc geninfo_all_blocks=1 00:40:59.440 --rc geninfo_unexecuted_blocks=1 00:40:59.440 00:40:59.440 ' 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:59.440 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:59.441 00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:01.974 00:14:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:01.974 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:01.974 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:01.974 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:01.975 00:14:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:01.975 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:01.975 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:01.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:01.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:41:01.975 00:41:01.975 --- 10.0.0.2 ping statistics --- 00:41:01.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.975 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:01.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:01.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:41:01.975 00:41:01.975 --- 10.0.0.1 ping statistics --- 00:41:01.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.975 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:41:01.975 only one NIC for nvmf test 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:01.975 rmmod nvme_tcp 00:41:01.975 rmmod nvme_fabrics 00:41:01.975 rmmod nvme_keyring 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:01.975 00:14:27 
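A minimal sketch of the network setup that nvmf_tcp_init performs in the trace above, reconstructed from the commands it prints; the cvl_0_* interface names and the 10.0.0.0/24 addresses are the ones the harness picked on this host:

    # one NIC (cvl_0_0) is moved into a private namespace to act as the target,
    # the other (cvl_0_1) stays in the host namespace as the initiator
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the SPDK_NVMF comment lets teardown strip exactly this rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator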
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:01.975 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:01.976 00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:03.880 00:14:29 
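The matching teardown that nvmftestfini runs above amounts to the sketch below; the only step not spelled out verbatim in the trace is _remove_spdk_ns, which is assumed here to delete the cvl_0_0_ns_spdk namespace:

    set +e                     # the nvme modules may already be unloaded
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    set -e
    # drop only the firewall rules tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns (not shown in this excerpt)
    ip -4 addr flush cvl_0_1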
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:03.880 00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:03.880 00:41:03.880 real 0m4.581s 00:41:03.880 user 0m0.939s 00:41:03.880 sys 0m1.646s 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:03.880 ************************************ 00:41:03.880 END TEST nvmf_target_multipath 00:41:03.880 ************************************ 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:03.880 ************************************ 00:41:03.880 START TEST nvmf_zcopy 00:41:03.880 ************************************ 00:41:03.880 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:04.139 * Looking for test storage... 
00:41:04.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:04.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.139 --rc genhtml_branch_coverage=1 00:41:04.139 --rc genhtml_function_coverage=1 00:41:04.139 --rc genhtml_legend=1 00:41:04.139 --rc geninfo_all_blocks=1 00:41:04.139 --rc geninfo_unexecuted_blocks=1 00:41:04.139 00:41:04.139 ' 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:04.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.139 --rc genhtml_branch_coverage=1 00:41:04.139 --rc genhtml_function_coverage=1 00:41:04.139 --rc genhtml_legend=1 00:41:04.139 --rc geninfo_all_blocks=1 00:41:04.139 --rc geninfo_unexecuted_blocks=1 00:41:04.139 00:41:04.139 ' 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:04.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.139 --rc genhtml_branch_coverage=1 00:41:04.139 --rc genhtml_function_coverage=1 00:41:04.139 --rc genhtml_legend=1 00:41:04.139 --rc geninfo_all_blocks=1 00:41:04.139 --rc geninfo_unexecuted_blocks=1 00:41:04.139 00:41:04.139 ' 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:04.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:04.139 --rc genhtml_branch_coverage=1 00:41:04.139 --rc genhtml_function_coverage=1 00:41:04.139 --rc genhtml_legend=1 00:41:04.139 --rc geninfo_all_blocks=1 00:41:04.139 --rc geninfo_unexecuted_blocks=1 00:41:04.139 00:41:04.139 ' 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:04.139 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:04.140 00:14:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:41:04.140 00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:41:06.039 00:14:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:06.039 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:06.040 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:06.040 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:06.040 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:06.040 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:06.040 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:06.298 00:14:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:06.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:06.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:41:06.298 00:41:06.298 --- 10.0.0.2 ping statistics --- 00:41:06.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.298 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:06.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:06.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:41:06.298 00:41:06.298 --- 10.0.0.1 ping statistics --- 00:41:06.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.298 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3681495 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3681495 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3681495 ']' 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:06.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:06.298 00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.298 [2024-11-10 00:14:32.477669] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:06.298 [2024-11-10 00:14:32.480106] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:41:06.298 [2024-11-10 00:14:32.480198] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:06.556 [2024-11-10 00:14:32.618069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.556 [2024-11-10 00:14:32.731949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:06.556 [2024-11-10 00:14:32.732011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:06.556 [2024-11-10 00:14:32.732034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:06.556 [2024-11-10 00:14:32.732051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:06.556 [2024-11-10 00:14:32.732069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:06.556 [2024-11-10 00:14:32.733453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:07.123 [2024-11-10 00:14:33.070371] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:07.123 [2024-11-10 00:14:33.070822] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
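nvmfappstart in the trace launches the target inside the target namespace and then waits for its RPC socket before any configuration is issued. A sketch of the equivalent manual steps; backgrounding and the /var/tmp/spdk.sock path are what the nvmfpid= assignment and waitforlisten imply rather than commands shown verbatim:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &   # -m 0x2: run on core 1 only, -e 0xFFFF: all tracepoint groups
    nvmfpid=$!
    # wait until the app answers JSON-RPC on /var/tmp/spdk.sock, then start configuring it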
00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.381 [2024-11-10 00:14:33.522449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.381 [2024-11-10 00:14:33.538761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:41:07.381 00:14:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.381 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.638 malloc0 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:07.639 { 00:41:07.639 "params": { 00:41:07.639 "name": "Nvme$subsystem", 00:41:07.639 "trtype": "$TEST_TRANSPORT", 00:41:07.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.639 "adrfam": "ipv4", 00:41:07.639 "trsvcid": "$NVMF_PORT", 00:41:07.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:07.639 "hdgst": ${hdgst:-false}, 00:41:07.639 "ddgst": ${ddgst:-false} 00:41:07.639 }, 00:41:07.639 "method": "bdev_nvme_attach_controller" 00:41:07.639 } 00:41:07.639 EOF 00:41:07.639 )") 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:07.639 00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:07.639 "params": { 00:41:07.639 "name": "Nvme1", 00:41:07.639 "trtype": "tcp", 00:41:07.639 "traddr": "10.0.0.2", 00:41:07.639 "adrfam": "ipv4", 00:41:07.639 "trsvcid": "4420", 00:41:07.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:07.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:07.639 "hdgst": false, 00:41:07.639 "ddgst": false 00:41:07.639 }, 00:41:07.639 "method": "bdev_nvme_attach_controller" 00:41:07.639 }' 00:41:07.639 [2024-11-10 00:14:33.699625] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
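The rpc_cmd calls above configure the target for the zcopy run. They are equivalent to driving the same /var/tmp/spdk.sock socket with scripts/rpc.py (a sketch; the harness forwards the identical arguments through its own rpc_cmd wrapper):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport with zero-copy enabled
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0                   # 32 MiB ram disk, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1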
00:41:07.639 [2024-11-10 00:14:33.699774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3681644 ] 00:41:07.896 [2024-11-10 00:14:33.856567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.896 [2024-11-10 00:14:34.014612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:08.827 Running I/O for 10 seconds... 00:41:10.695 4210.00 IOPS, 32.89 MiB/s [2024-11-09T23:14:37.830Z] 4272.00 IOPS, 33.38 MiB/s [2024-11-09T23:14:38.764Z] 4250.33 IOPS, 33.21 MiB/s [2024-11-09T23:14:39.703Z] 4240.25 IOPS, 33.13 MiB/s [2024-11-09T23:14:41.080Z] 4234.60 IOPS, 33.08 MiB/s [2024-11-09T23:14:42.011Z] 4240.83 IOPS, 33.13 MiB/s [2024-11-09T23:14:42.943Z] 4234.00 IOPS, 33.08 MiB/s [2024-11-09T23:14:43.877Z] 4240.50 IOPS, 33.13 MiB/s [2024-11-09T23:14:44.810Z] 4237.11 IOPS, 33.10 MiB/s [2024-11-09T23:14:44.810Z] 4234.20 IOPS, 33.08 MiB/s 00:41:18.609 Latency(us) 00:41:18.609 [2024-11-09T23:14:44.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:18.609 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:18.609 Verification LBA range: start 0x0 length 0x1000 00:41:18.609 Nvme1n1 : 10.02 4236.63 33.10 0.00 0.00 30127.95 5194.33 42719.76 00:41:18.609 [2024-11-09T23:14:44.810Z] =================================================================================================================== 00:41:18.609 [2024-11-09T23:14:44.810Z] Total : 4236.63 33.10 0.00 0.00 30127.95 5194.33 42719.76 00:41:19.544 00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3682957 00:41:19.544 00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:19.544 00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:19.544 00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:19.544 00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:19.544 00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:19.544 00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:19.544 00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:19.544 00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:19.544 { 00:41:19.544 "params": { 00:41:19.544 "name": "Nvme$subsystem", 00:41:19.544 "trtype": "$TEST_TRANSPORT", 00:41:19.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:19.544 "adrfam": "ipv4", 00:41:19.544 "trsvcid": "$NVMF_PORT", 00:41:19.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:19.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:19.544 "hdgst": ${hdgst:-false}, 00:41:19.544 "ddgst": ${ddgst:-false} 00:41:19.544 }, 00:41:19.544 "method": "bdev_nvme_attach_controller" 00:41:19.544 } 00:41:19.545 EOF 00:41:19.545 )") 00:41:19.545 00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:19.545 
00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:41:19.545 [2024-11-10 00:14:45.598352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.598409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:19.545 00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:19.545 "params": { 00:41:19.545 "name": "Nvme1", 00:41:19.545 "trtype": "tcp", 00:41:19.545 "traddr": "10.0.0.2", 00:41:19.545 "adrfam": "ipv4", 00:41:19.545 "trsvcid": "4420", 00:41:19.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:19.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:19.545 "hdgst": false, 00:41:19.545 "ddgst": false 00:41:19.545 }, 00:41:19.545 "method": "bdev_nvme_attach_controller" 00:41:19.545 }' 00:41:19.545 [2024-11-10 00:14:45.606264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.606299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.614219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.614249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.622228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.622256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.630241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.630270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.638217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.638250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.646215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.646241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.654212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.654238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.662202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.662227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.670218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.670245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.677205] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:41:19.545 [2024-11-10 00:14:45.677320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682957 ] 00:41:19.545 [2024-11-10 00:14:45.678204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.678230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.686219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.686246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.694217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.694250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.702204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.702231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.710226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.710252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.718212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.718237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.726218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.726243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.734213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.734240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.545 [2024-11-10 00:14:45.742209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.545 [2024-11-10 00:14:45.742236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.750218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.750245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.758224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.758250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.766237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.766270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.774246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.774278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.782246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.782278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.790234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.790266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.798246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.798278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.806237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.806269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.814245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.814276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.822263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.822294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.829211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:19.804 [2024-11-10 00:14:45.830233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.830265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.838246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.838278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.846283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.846323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.854277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.854321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.862248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.862281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.870228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.870259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.878256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.878288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.886246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.886277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.894233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:19.804 [2024-11-10 00:14:45.894264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.902253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.902285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.910246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.910277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.804 [2024-11-10 00:14:45.918248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.804 [2024-11-10 00:14:45.918279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.805 [2024-11-10 00:14:45.926243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.805 [2024-11-10 00:14:45.926274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.805 [2024-11-10 00:14:45.934240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.805 [2024-11-10 00:14:45.934270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.805 [2024-11-10 00:14:45.942244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.805 [2024-11-10 00:14:45.942275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.805 [2024-11-10 00:14:45.950243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.805 [2024-11-10 00:14:45.950274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.805 [2024-11-10 00:14:45.958225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.805 [2024-11-10 00:14:45.958257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.805 [2024-11-10 00:14:45.966251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.805 [2024-11-10 00:14:45.966282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.805 [2024-11-10 00:14:45.968631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:19.805 [2024-11-10 00:14:45.974250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.805 [2024-11-10 00:14:45.974282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.805 [2024-11-10 00:14:45.982232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.805 [2024-11-10 00:14:45.982264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.805 [2024-11-10 00:14:45.990312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.805 [2024-11-10 00:14:45.990365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.805 [2024-11-10 00:14:45.998282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.805 [2024-11-10 00:14:45.998322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.006261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.006295] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.014269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.014303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.022231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.022264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.030245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.030278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.038244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.038276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.046230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.046262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.054255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.054287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.062289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.062334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.070303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.070349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.078319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.078366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.086304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.086352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.094268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.094301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.102244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.102288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.110250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.110292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.118259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.118292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.126235] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.126266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.134245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.134277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.142250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.142282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.150236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.150267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.158250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.158293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.166256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.166287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.174235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.174267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.182245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.064 [2024-11-10 00:14:46.182276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.064 [2024-11-10 00:14:46.190231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.065 [2024-11-10 00:14:46.190262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.065 [2024-11-10 00:14:46.198256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.065 [2024-11-10 00:14:46.198287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.065 [2024-11-10 00:14:46.206262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.065 [2024-11-10 00:14:46.206293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.065 [2024-11-10 00:14:46.214232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.065 [2024-11-10 00:14:46.214264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.065 [2024-11-10 00:14:46.222313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.065 [2024-11-10 00:14:46.222357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.065 [2024-11-10 00:14:46.230303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.065 [2024-11-10 00:14:46.230348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.065 [2024-11-10 00:14:46.238290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.065 [2024-11-10 00:14:46.238336] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.065 [2024-11-10 00:14:46.246249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.065 [2024-11-10 00:14:46.246281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.065 [2024-11-10 00:14:46.254233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.065 [2024-11-10 00:14:46.254265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.065 [2024-11-10 00:14:46.262250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.065 [2024-11-10 00:14:46.262283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.323 [2024-11-10 00:14:46.270254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.323 [2024-11-10 00:14:46.270286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.323 [2024-11-10 00:14:46.278236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.323 [2024-11-10 00:14:46.278268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.323 [2024-11-10 00:14:46.286246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.323 [2024-11-10 00:14:46.286277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.323 [2024-11-10 00:14:46.294246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.323 [2024-11-10 00:14:46.294277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.323 [2024-11-10 00:14:46.302255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.323 [2024-11-10 00:14:46.302287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.323 [2024-11-10 00:14:46.310246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.323 [2024-11-10 00:14:46.310277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.323 [2024-11-10 00:14:46.318226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.323 [2024-11-10 00:14:46.318256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.323 [2024-11-10 00:14:46.326249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.326281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.334243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.334274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.342265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.342298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.350258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.350294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.358257] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.358294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.366268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.366304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.374254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.374290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.382245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.382281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.390268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.390302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.398246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.398279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.406237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.406269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.414248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.414281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.422253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.422288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.430236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.430272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.438254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.438288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.446239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.446272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.454248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.454280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.462245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.462277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.470232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.470263] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.478254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.478289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.486274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.486307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.494234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.494265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.502250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.502282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.510233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.510265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.324 [2024-11-10 00:14:46.518255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.324 [2024-11-10 00:14:46.518290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.526256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.526290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.534229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.534262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.542246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.542279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.550249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.550281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.558240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.558273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.566248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.566281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.594248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.594288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.602249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.602283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 Running I/O for 5 seconds... 
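(Editor's note, not part of the captured output: the 5-second run that starts here uses the same bdevperf binary and generated JSON config traced above. Below is a minimal standalone sketch of that invocation for readers who want to reproduce it outside the harness. It assumes an SPDK build under ./spdk and that the target configured earlier in this log is still listening on 10.0.0.2:4420; the outer "subsystems"/"bdev"/"config" wrapper is the standard SPDK JSON config layout, the inner bdev_nvme_attach_controller parameters are copied verbatim from the config printed above, and the temp-file path is an arbitrary stand-in for the /dev/fd/63 process substitution the test uses.)

# Write the bdev configuration that the test hands to bdevperf via /dev/fd/63.
cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same workload flags as the run above: 5 s duration, queue depth 128,
# 50/50 random read/write mix, 8192-byte I/O size.
./spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192

(The repeated "Requested NSID 1 already in use" / "Unable to add namespace" records that follow are emitted while this workload is running, as the test keeps re-issuing nvmf_subsystem_add_ns against the live subsystem.)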
00:41:20.581 [2024-11-10 00:14:46.620128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.620176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.634240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.634273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.650898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.650947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.666127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.666167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.681856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.681891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.696640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.696674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.581 [2024-11-10 00:14:46.711241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.581 [2024-11-10 00:14:46.711274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.582 [2024-11-10 00:14:46.726233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.582 [2024-11-10 00:14:46.726270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.582 [2024-11-10 00:14:46.740400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.582 [2024-11-10 00:14:46.740432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.582 [2024-11-10 00:14:46.754955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.582 [2024-11-10 00:14:46.754996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.582 [2024-11-10 00:14:46.770310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.582 [2024-11-10 00:14:46.770349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.785852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.785901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.801547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.801597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.817662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.817695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.833439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 
[2024-11-10 00:14:46.833479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.848925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.848966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.864159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.864198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.879155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.879186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.893964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.893996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.908364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.908411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.922537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.922575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.937956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.937994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.952347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.952385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.966848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.966900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.981597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.981648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:46.996530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:46.996563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:47.011198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:47.011237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:20.839 [2024-11-10 00:14:47.025206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:20.839 [2024-11-10 00:14:47.025245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.041344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.041384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.055856] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.055896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.071249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.071288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.085658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.085693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.101409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.101440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.116338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.116370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.131299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.131331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.145595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.145657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.160331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.160369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.176153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.176192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.191070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.191118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.206029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.206067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.221851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.221906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.236329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.236368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.251985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.252024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.267422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.267461] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.282916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.282973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.097 [2024-11-10 00:14:47.297093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.097 [2024-11-10 00:14:47.297124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.313425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.313464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.328370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.328409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.341848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.341896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.357980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.358014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.373471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.373502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.388086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.388119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.402923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.402970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.417618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.417668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.432066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.432105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.446804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.446836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.461645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.461680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.476014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.476061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.490139] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.490170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.505739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.505776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.520186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.520219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.534433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.534471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.355 [2024-11-10 00:14:47.548391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.355 [2024-11-10 00:14:47.548430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.646 [2024-11-10 00:14:47.563922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.646 [2024-11-10 00:14:47.563974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.646 [2024-11-10 00:14:47.578316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.646 [2024-11-10 00:14:47.578355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.646 [2024-11-10 00:14:47.592986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.646 [2024-11-10 00:14:47.593025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.646 [2024-11-10 00:14:47.607430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.646 [2024-11-10 00:14:47.607469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.646 8438.00 IOPS, 65.92 MiB/s [2024-11-09T23:14:47.847Z] [2024-11-10 00:14:47.622066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.646 [2024-11-10 00:14:47.622102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.646 [2024-11-10 00:14:47.637737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.646 [2024-11-10 00:14:47.637770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.646 [2024-11-10 00:14:47.652888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.646 [2024-11-10 00:14:47.652921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.646 [2024-11-10 00:14:47.668020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.646 [2024-11-10 00:14:47.668059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.646 [2024-11-10 00:14:47.683543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.646 [2024-11-10 00:14:47.683581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.646 [2024-11-10 00:14:47.697972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:21.646 [2024-11-10 00:14:47.698010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.646 [2024-11-10 00:14:47.711416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.646 [2024-11-10 00:14:47.711455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.646 [2024-11-10 00:14:47.727484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.647 [2024-11-10 00:14:47.727516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.647 [2024-11-10 00:14:47.742787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.647 [2024-11-10 00:14:47.742820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.647 [2024-11-10 00:14:47.761333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.647 [2024-11-10 00:14:47.761371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.647 [2024-11-10 00:14:47.774174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.647 [2024-11-10 00:14:47.774213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.647 [2024-11-10 00:14:47.790186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.647 [2024-11-10 00:14:47.790225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.647 [2024-11-10 00:14:47.804659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.647 [2024-11-10 00:14:47.804693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.647 [2024-11-10 00:14:47.819534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.647 [2024-11-10 00:14:47.819571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:47.835297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:47.835330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:47.850027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:47.850061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:47.865555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:47.865614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:47.880789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:47.880824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:47.895839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:47.895886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:47.910155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:47.910187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:47.925409] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:47.925447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:47.940196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:47.940231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:47.954959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:47.954990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:47.968980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:47.969019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:47.983943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:47.983982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:47.997762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:47.997797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:48.013685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:48.013719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:48.027504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:48.027543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:48.043196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:48.043235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:48.057901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:48.057950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:48.073084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:48.073124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:48.086146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:48.086186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:48.102686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:48.102719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:21.941 [2024-11-10 00:14:48.117162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:21.941 [2024-11-10 00:14:48.117202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.131751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.131788] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.147152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.147187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.160939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.160978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.177025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.177064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.191691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.191724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.205305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.205344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.220705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.220738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.235302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.235340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.254419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.254458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.267389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.267428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.284077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.284110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.298704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.298737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.313909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.313965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.328636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.328671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.344018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.344057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.358685] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.358724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.374002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.374034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.201 [2024-11-10 00:14:48.388760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.201 [2024-11-10 00:14:48.388794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.403810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.403844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.418486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.418518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.434161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.434201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.449192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.449232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.464311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.464349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.479727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.479762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.494749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.494782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.509348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.509380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.524026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.524065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.538434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.538472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.553129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.553184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.567061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.567100] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.583256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.583295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.598358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.598400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.613078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.613110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 8504.50 IOPS, 66.44 MiB/s [2024-11-09T23:14:48.660Z] [2024-11-10 00:14:48.627451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.627490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.642112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.642151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.459 [2024-11-10 00:14:48.656933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.459 [2024-11-10 00:14:48.656966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.671674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.671708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.686893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.686932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.702336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.702375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.717945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.717977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.733239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.733278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.747829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.747861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.762978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.763017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.777979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.778017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 
00:14:48.793305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.793344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.808139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.808172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.822099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.822138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.837074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.837108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.852182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.852214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.866484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.866524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.881148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.881195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.896071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.896110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.717 [2024-11-10 00:14:48.911018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.717 [2024-11-10 00:14:48.911057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:48.925535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:48.925582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:48.939131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:48.939162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:48.958667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:48.958699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:48.971790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:48.971823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:48.987970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:48.988009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:49.002665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:49.002700] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:49.017181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:49.017221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:49.031949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:49.031988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:49.046403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:49.046438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:49.062120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:49.062159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:49.077699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:49.077735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:49.092383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:49.092417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:49.106448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:49.106489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:49.121270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:49.121310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:49.137066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:49.137099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:49.152539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:49.152594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:22.979 [2024-11-10 00:14:49.167814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:22.979 [2024-11-10 00:14:49.167848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.182297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.182329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.196163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.196202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.212222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.212262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.227219] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.227258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.242400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.242434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.257744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.257778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.272803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.272838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.287567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.287616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.303161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.303200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.320329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.320368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.333627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.333666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.349669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.349702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.363718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.363751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.379185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.379223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.395610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.395645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.407794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.407826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.424020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.424058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.238 [2024-11-10 00:14:49.438834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.238 [2024-11-10 00:14:49.438887] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.455351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.455386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.468624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.468661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.485074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.485107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.500322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.500361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.514500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.514539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.531642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.531677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.546758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.546790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.561757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.561791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.577378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.577417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.592069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.592107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.606509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.606547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 8516.00 IOPS, 66.53 MiB/s [2024-11-09T23:14:49.698Z] [2024-11-10 00:14:49.621801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.621835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.636477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.636516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.651087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.651126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 
00:14:49.666071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.666106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.681259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.681298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.497 [2024-11-10 00:14:49.696501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.497 [2024-11-10 00:14:49.696539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.711024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.711063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.725986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.726024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.741406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.741445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.757039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.757078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.771738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.771770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.786244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.786283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.800480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.800520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.814794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.814827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.835010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.835049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.847114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.847153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.863251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.863290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.877962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.878001] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.892936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.755 [2024-11-10 00:14:49.892974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.755 [2024-11-10 00:14:49.907873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.756 [2024-11-10 00:14:49.907911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.756 [2024-11-10 00:14:49.922263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.756 [2024-11-10 00:14:49.922302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.756 [2024-11-10 00:14:49.937075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.756 [2024-11-10 00:14:49.937114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:23.756 [2024-11-10 00:14:49.951546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:23.756 [2024-11-10 00:14:49.951594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:49.965995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:49.966033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:49.981912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:49.981951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:49.996707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:49.996741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.011669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.011717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.027892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.027926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.043233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.043272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.058321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.058361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.073986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.074025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.089439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.089478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.104961] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.105001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.120329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.120362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.135389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.135429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.150820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.150853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.166443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.166476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.181403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.181436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.196385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.196425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.014 [2024-11-10 00:14:50.211357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.014 [2024-11-10 00:14:50.211391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.273 [2024-11-10 00:14:50.226484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.273 [2024-11-10 00:14:50.226523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.273 [2024-11-10 00:14:50.241933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.273 [2024-11-10 00:14:50.241986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.273 [2024-11-10 00:14:50.257456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.273 [2024-11-10 00:14:50.257496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.273 [2024-11-10 00:14:50.272757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.273 [2024-11-10 00:14:50.272790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.273 [2024-11-10 00:14:50.287765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.273 [2024-11-10 00:14:50.287800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.273 [2024-11-10 00:14:50.302941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.273 [2024-11-10 00:14:50.302981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.273 [2024-11-10 00:14:50.318006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.273 [2024-11-10 00:14:50.318045] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.273 [2024-11-10 00:14:50.332666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.273 [2024-11-10 00:14:50.332717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.273 [2024-11-10 00:14:50.347311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.273 [2024-11-10 00:14:50.347350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.273 [2024-11-10 00:14:50.362857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.273 [2024-11-10 00:14:50.362908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.273 [2024-11-10 00:14:50.379723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.274 [2024-11-10 00:14:50.379775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.274 [2024-11-10 00:14:50.392728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.274 [2024-11-10 00:14:50.392761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.274 [2024-11-10 00:14:50.408430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.274 [2024-11-10 00:14:50.408462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.274 [2024-11-10 00:14:50.421698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.274 [2024-11-10 00:14:50.421731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.274 [2024-11-10 00:14:50.437366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.274 [2024-11-10 00:14:50.437404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.274 [2024-11-10 00:14:50.451996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.274 [2024-11-10 00:14:50.452027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.274 [2024-11-10 00:14:50.466302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.274 [2024-11-10 00:14:50.466334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.481887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.481921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.496926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.496965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.511756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.511789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.525879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.525911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.540736] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.540771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.555790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.555825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.570745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.570778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.584025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.584073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.600196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.600230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.614756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.614795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 8508.00 IOPS, 66.47 MiB/s [2024-11-09T23:14:50.733Z] [2024-11-10 00:14:50.629487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.629526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.644497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.644531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.659351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.659383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.674291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.674323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.689103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.689135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.704109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.704142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.532 [2024-11-10 00:14:50.719090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.532 [2024-11-10 00:14:50.719129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.734454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.734493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.749552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:24.791 [2024-11-10 00:14:50.749602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.764293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.764332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.779306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.779344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.794099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.794131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.808745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.808779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.823223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.823262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.842326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.842358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.855726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.855761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.872173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.872212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.887420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.887459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.904282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.904316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.917035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.917075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.933108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.933147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.947787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.947821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.962049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.962083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.976663] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.976701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:24.791 [2024-11-10 00:14:50.991610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:24.791 [2024-11-10 00:14:50.991663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.006444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.006476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.021534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.021581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.036217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.036256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.051049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.051088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.065645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.065679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.080092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.080130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.094687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.094734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.109598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.109644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.125232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.125271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.139907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.139954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.154687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.154721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.170024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.170063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.184501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.184533] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.198671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.198704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.214187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.214226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.228539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.228578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.050 [2024-11-10 00:14:51.243737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.050 [2024-11-10 00:14:51.243770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.308 [2024-11-10 00:14:51.258256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.308 [2024-11-10 00:14:51.258296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.308 [2024-11-10 00:14:51.273969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.308 [2024-11-10 00:14:51.274009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.308 [2024-11-10 00:14:51.289123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.308 [2024-11-10 00:14:51.289155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.308 [2024-11-10 00:14:51.303404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.308 [2024-11-10 00:14:51.303438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.308 [2024-11-10 00:14:51.318145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.308 [2024-11-10 00:14:51.318182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.308 [2024-11-10 00:14:51.334041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.308 [2024-11-10 00:14:51.334084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.308 [2024-11-10 00:14:51.349131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.308 [2024-11-10 00:14:51.349172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.308 [2024-11-10 00:14:51.365166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.308 [2024-11-10 00:14:51.365202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.308 [2024-11-10 00:14:51.379752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.308 [2024-11-10 00:14:51.379788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.308 [2024-11-10 00:14:51.393852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.308 [2024-11-10 00:14:51.393888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.308 [2024-11-10 00:14:51.408758] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.308 [2024-11-10 00:14:51.408794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.308 [2024-11-10 00:14:51.422835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.308 [2024-11-10 00:14:51.422888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.309 [2024-11-10 00:14:51.441257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.309 [2024-11-10 00:14:51.441292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.309 [2024-11-10 00:14:51.453743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.309 [2024-11-10 00:14:51.453778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.309 [2024-11-10 00:14:51.469742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.309 [2024-11-10 00:14:51.469777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.309 [2024-11-10 00:14:51.483941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.309 [2024-11-10 00:14:51.483973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.309 [2024-11-10 00:14:51.497674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.309 [2024-11-10 00:14:51.497710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.511897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.511933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.526335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.526368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.540312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.540345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.554618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.554669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.568851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.568914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.583158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.583191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.599501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.599537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.612178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.612212] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 8539.40 IOPS, 66.71 MiB/s [2024-11-09T23:14:51.768Z] [2024-11-10 00:14:51.627281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.627316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.634307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.634339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:25.567
00:41:25.567 Latency(us)
00:41:25.567 [2024-11-09T23:14:51.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:25.567 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:41:25.567 Nvme1n1 : 5.01 8541.01 66.73 0.00 0.00 14960.52 3786.52 24272.59
00:41:25.567 [2024-11-09T23:14:51.768Z] ===================================================================================================================
00:41:25.567 [2024-11-09T23:14:51.768Z] Total : 8541.01 66.73 0.00 0.00 14960.52 3786.52 24272.59
00:41:25.567 [2024-11-10 00:14:51.642227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.642263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.650213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.650243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.658224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.658252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.666220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.666247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.674222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.674249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.682324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.682372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.690340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.690391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.698347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.698399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.706237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.706264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.714228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.714255]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.722215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.722241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.730218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.730245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.738206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.567 [2024-11-10 00:14:51.738234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.567 [2024-11-10 00:14:51.746221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.568 [2024-11-10 00:14:51.746249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.568 [2024-11-10 00:14:51.754203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.568 [2024-11-10 00:14:51.754229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.568 [2024-11-10 00:14:51.762216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.568 [2024-11-10 00:14:51.762243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.770328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.770388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.778361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.778418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.786361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.786410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.794217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.794251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.802232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.802259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.810240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.810267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.818201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.818227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.826228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.826254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.834222] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.834249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.842201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.842227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.850227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.850253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.858221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.858247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.866204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.866231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.874215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.874241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.886206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.886233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.894220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.894246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.902217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.826 [2024-11-10 00:14:51.902242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.826 [2024-11-10 00:14:51.910221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:51.910246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:51.918221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:51.918247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:51.926358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:51.926410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:51.934326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:51.934378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:51.942245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:51.942271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:51.950208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:51.950241] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:51.958216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:51.958243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:51.966216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:51.966241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:51.974201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:51.974226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:51.982289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:51.982328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:51.990365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:51.990421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:51.998348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:51.998403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:52.006378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:52.006430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:52.014216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:52.014243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:25.827 [2024-11-10 00:14:52.022214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:25.827 [2024-11-10 00:14:52.022241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.085 [2024-11-10 00:14:52.030246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.085 [2024-11-10 00:14:52.030277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.085 [2024-11-10 00:14:52.038204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.085 [2024-11-10 00:14:52.038232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.085 [2024-11-10 00:14:52.046220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.085 [2024-11-10 00:14:52.046246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.085 [2024-11-10 00:14:52.054223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.085 [2024-11-10 00:14:52.054250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.085 [2024-11-10 00:14:52.062206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.085 [2024-11-10 00:14:52.062232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.085 [2024-11-10 00:14:52.070220] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.085 [2024-11-10 00:14:52.070246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.078199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.078225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.086215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.086242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.094217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.094243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.102221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.102247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.110215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.110240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.118214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.118240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.126199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.126225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.134223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.134251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.142205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.142232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.150335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.150384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.158345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.158399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.166392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.166418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.174217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.174242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.182222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.182247] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.190200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.190226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.198233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.198260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.206207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.206232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.214218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.214245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.222228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.222255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.230205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.230230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.238219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.238246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.246216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.246242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.254209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.254250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.262255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.262285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.270337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.270391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.278240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.278266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.086 [2024-11-10 00:14:52.286229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.086 [2024-11-10 00:14:52.286257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.350 [2024-11-10 00:14:52.294226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.350 [2024-11-10 00:14:52.294253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.350 [2024-11-10 00:14:52.302215] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.350 [2024-11-10 00:14:52.302241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.350 [2024-11-10 00:14:52.310217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.350 [2024-11-10 00:14:52.310243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.350 [2024-11-10 00:14:52.318204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.350 [2024-11-10 00:14:52.318230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.350 [2024-11-10 00:14:52.326215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.350 [2024-11-10 00:14:52.326241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.350 [2024-11-10 00:14:52.334202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.350 [2024-11-10 00:14:52.334229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.342217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.342243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.350275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.350313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.358340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.358388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.366237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.366264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.374220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.374246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.382205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.382231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.390232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.390258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.398201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.398227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.406219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.406246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.414216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.414242] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.422202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.422227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.430214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.430240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.438231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.438259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.446235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.446267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.454250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.454278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.462211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.462239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 [2024-11-10 00:14:52.470258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:26.351 [2024-11-10 00:14:52.470284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:26.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3682957) - No such process 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3682957 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:26.351 delay0 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
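Annotation: the rpc_cmd calls just above (remove NSID 1, create a delay bdev on top of malloc0, re-attach it as NSID 1) swap the fast namespace for a deliberately slow one so that the abort example run that follows has outstanding I/O to abort. Below is a minimal, hedged sketch of that same sequence driven by scripts/rpc.py outside the test harness; the socket path, subsystem NQN, bdev names and latency values are taken from this log, and running it standalone (rather than via zcopy.sh's rpc_cmd wrapper) is an assumption.

#!/usr/bin/env bash
# Hedged sketch: replay the namespace swap from target/zcopy.sh by hand.
# Assumes an nvmf_tgt is already listening on /var/tmp/spdk.sock with
# subsystem nqn.2016-06.io.spdk:cnode1 and a bdev named malloc0.
set -e
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Drop the existing namespace 1 so the slow one can take its place.
$RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# Wrap malloc0 in a delay bdev with very large latencies (values from the log).
$RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Re-add the delayed bdev as NSID 1; I/O against it is now slow enough for the
# abort example (build/examples/abort, invoked next in the log) to find
# commands still in flight.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1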
00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.351 00:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:26.614 [2024-11-10 00:14:52.644028] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:34.721 Initializing NVMe Controllers 00:41:34.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:34.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:34.721 Initialization complete. Launching workers. 00:41:34.721 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 227, failed: 18154 00:41:34.721 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18239, failed to submit 142 00:41:34.721 success 18168, unsuccessful 71, failed 0 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:34.721 rmmod nvme_tcp 00:41:34.721 rmmod nvme_fabrics 00:41:34.721 rmmod nvme_keyring 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3681495 ']' 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3681495 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3681495 ']' 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3681495 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3681495 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3681495' 00:41:34.721 killing process with pid 3681495 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3681495 00:41:34.721 00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3681495 00:41:34.979 00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:34.979 00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:34.979 00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:34.979 00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:41:34.979 00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:41:34.979 00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:34.979 00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:41:34.979 00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:34.979 00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:34.979 00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.979 00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.979 00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:37.513 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:37.513 00:41:37.513 real 0m33.061s 00:41:37.513 user 0m47.748s 00:41:37.513 sys 0m10.028s 00:41:37.513 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:37.513 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:37.513 ************************************ 00:41:37.513 END TEST nvmf_zcopy 00:41:37.513 ************************************ 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:37.514 ************************************ 00:41:37.514 START TEST nvmf_nmic 00:41:37.514 ************************************ 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:37.514 * Looking for test storage... 00:41:37.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:37.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:37.514 --rc genhtml_branch_coverage=1 00:41:37.514 --rc genhtml_function_coverage=1 00:41:37.514 --rc genhtml_legend=1 00:41:37.514 --rc geninfo_all_blocks=1 00:41:37.514 --rc geninfo_unexecuted_blocks=1 00:41:37.514 00:41:37.514 ' 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:37.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:37.514 --rc genhtml_branch_coverage=1 00:41:37.514 --rc genhtml_function_coverage=1 00:41:37.514 --rc genhtml_legend=1 00:41:37.514 --rc geninfo_all_blocks=1 00:41:37.514 --rc geninfo_unexecuted_blocks=1 00:41:37.514 00:41:37.514 ' 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:37.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:37.514 --rc genhtml_branch_coverage=1 00:41:37.514 --rc genhtml_function_coverage=1 00:41:37.514 --rc genhtml_legend=1 00:41:37.514 --rc geninfo_all_blocks=1 00:41:37.514 --rc geninfo_unexecuted_blocks=1 00:41:37.514 00:41:37.514 ' 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:37.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:37.514 --rc genhtml_branch_coverage=1 00:41:37.514 --rc genhtml_function_coverage=1 00:41:37.514 --rc genhtml_legend=1 00:41:37.514 --rc geninfo_all_blocks=1 00:41:37.514 --rc geninfo_unexecuted_blocks=1 00:41:37.514 00:41:37.514 ' 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:37.514 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:37.515 00:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:41:37.515 00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:39.418 00:15:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:39.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:39.418 00:15:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:39.418 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:39.418 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:39.418 
00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:39.418 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
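Annotation: nvmf_tcp_init (nvmf/common.sh) splits the two e810 ports found above into a target side and an initiator side: cvl_0_0 is moved into a private network namespace with 10.0.0.2, while cvl_0_1 stays in the root namespace with 10.0.0.1. The sketch below condenses the individual ip/iptables/ping steps that appear immediately above and below this point in the log; interface and namespace names are taken from the log, and running it as root outside the harness is an assumption.

# Hedged sketch of the netns split performed by nvmf_tcp_init.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow the NVMe/TCP listener port through the host firewall (rule copied from the log).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Basic reachability check in both directions, as the harness does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1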
00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:39.418 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:39.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:39.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:41:39.419 00:41:39.419 --- 10.0.0.2 ping statistics --- 00:41:39.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.419 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:39.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:39.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:41:39.419 00:41:39.419 --- 10.0.0.1 ping statistics --- 00:41:39.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.419 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3686815 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3686815 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3686815 ']' 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:39.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:39.419 00:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.677 [2024-11-10 00:15:05.647892] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:39.677 [2024-11-10 00:15:05.650648] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:41:39.677 [2024-11-10 00:15:05.650767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:39.677 [2024-11-10 00:15:05.807858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:39.935 [2024-11-10 00:15:05.950097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:39.935 [2024-11-10 00:15:05.950169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:39.935 [2024-11-10 00:15:05.950198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:39.935 [2024-11-10 00:15:05.950237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:39.935 [2024-11-10 00:15:05.950256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:39.936 [2024-11-10 00:15:05.953073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:39.936 [2024-11-10 00:15:05.953145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:39.936 [2024-11-10 00:15:05.953239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:39.936 [2024-11-10 00:15:05.953250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:40.195 [2024-11-10 00:15:06.325225] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:40.195 [2024-11-10 00:15:06.337882] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:40.195 [2024-11-10 00:15:06.338051] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:40.195 [2024-11-10 00:15:06.338848] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:40.195 [2024-11-10 00:15:06.339202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:40.453 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:40.453 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:41:40.453 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:40.453 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:40.454 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.454 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:40.454 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:40.712 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:40.712 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.712 [2024-11-10 00:15:06.658304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:40.712 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:40.712 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:40.712 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:40.712 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.712 Malloc0 00:41:40.712 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:40.712 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:40.712 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:40.712 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:40.713 
00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.713 [2024-11-10 00:15:06.774497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:40.713 test case1: single bdev can't be used in multiple subsystems 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.713 [2024-11-10 00:15:06.798188] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:41:40.713 [2024-11-10 00:15:06.798250] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:40.713 [2024-11-10 00:15:06.798276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:40.713 request: 00:41:40.713 { 00:41:40.713 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:40.713 "namespace": { 00:41:40.713 "bdev_name": "Malloc0", 00:41:40.713 "no_auto_visible": false 00:41:40.713 }, 00:41:40.713 "method": "nvmf_subsystem_add_ns", 00:41:40.713 "req_id": 1 00:41:40.713 } 00:41:40.713 Got JSON-RPC error response 00:41:40.713 response: 00:41:40.713 { 00:41:40.713 "code": -32602, 00:41:40.713 "message": "Invalid parameters" 00:41:40.713 } 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:40.713 00:15:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:40.713 Adding namespace failed - expected result. 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:40.713 test case2: host connect to nvmf target in multiple paths 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:40.713 [2024-11-10 00:15:06.806281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:40.713 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:40.976 00:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:41.235 00:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:41.235 00:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:41:41.235 00:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:41:41.235 00:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:41:41.235 00:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:41:43.133 00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:41:43.133 00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:41:43.133 00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:41:43.133 00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:41:43.133 00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:41:43.133 00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:41:43.133 00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:43.133 [global] 00:41:43.133 thread=1 00:41:43.133 invalidate=1 
00:41:43.133 rw=write 00:41:43.133 time_based=1 00:41:43.133 runtime=1 00:41:43.134 ioengine=libaio 00:41:43.134 direct=1 00:41:43.134 bs=4096 00:41:43.134 iodepth=1 00:41:43.134 norandommap=0 00:41:43.134 numjobs=1 00:41:43.134 00:41:43.134 verify_dump=1 00:41:43.134 verify_backlog=512 00:41:43.134 verify_state_save=0 00:41:43.134 do_verify=1 00:41:43.134 verify=crc32c-intel 00:41:43.134 [job0] 00:41:43.134 filename=/dev/nvme0n1 00:41:43.134 Could not set queue depth (nvme0n1) 00:41:43.391 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:43.391 fio-3.35 00:41:43.391 Starting 1 thread 00:41:44.773 00:41:44.773 job0: (groupid=0, jobs=1): err= 0: pid=3687849: Sun Nov 10 00:15:10 2024 00:41:44.773 read: IOPS=831, BW=3327KiB/s (3407kB/s)(3360KiB/1010msec) 00:41:44.773 slat (nsec): min=4616, max=40095, avg=10239.43, stdev=5131.32 00:41:44.773 clat (usec): min=239, max=41037, avg=909.22, stdev=4824.04 00:41:44.773 lat (usec): min=244, max=41055, avg=919.46, stdev=4825.91 00:41:44.773 clat percentiles (usec): 00:41:44.773 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 260], 20.00th=[ 265], 00:41:44.773 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 293], 60.00th=[ 306], 00:41:44.773 | 70.00th=[ 367], 80.00th=[ 388], 90.00th=[ 494], 95.00th=[ 510], 00:41:44.773 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:44.773 | 99.99th=[41157] 00:41:44.773 write: IOPS=1013, BW=4055KiB/s (4153kB/s)(4096KiB/1010msec); 0 zone resets 00:41:44.773 slat (usec): min=6, max=28909, avg=35.53, stdev=903.20 00:41:44.773 clat (usec): min=160, max=469, avg=188.26, stdev=20.30 00:41:44.773 lat (usec): min=167, max=29293, avg=223.78, stdev=909.53 00:41:44.773 clat percentiles (usec): 00:41:44.773 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 178], 00:41:44.773 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:41:44.773 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 215], 00:41:44.773 | 99.00th=[ 247], 99.50th=[ 273], 99.90th=[ 457], 99.95th=[ 469], 00:41:44.773 | 99.99th=[ 469] 00:41:44.773 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:41:44.773 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:44.773 lat (usec) : 250=56.22%, 500=39.81%, 750=3.33% 00:41:44.773 lat (msec) : 50=0.64% 00:41:44.773 cpu : usr=0.79%, sys=1.59%, ctx=1867, majf=0, minf=1 00:41:44.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:44.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:44.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:44.773 issued rwts: total=840,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:44.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:44.773 00:41:44.773 Run status group 0 (all jobs): 00:41:44.773 READ: bw=3327KiB/s (3407kB/s), 3327KiB/s-3327KiB/s (3407kB/s-3407kB/s), io=3360KiB (3441kB), run=1010-1010msec 00:41:44.773 WRITE: bw=4055KiB/s (4153kB/s), 4055KiB/s-4055KiB/s (4153kB/s-4153kB/s), io=4096KiB (4194kB), run=1010-1010msec 00:41:44.773 00:41:44.773 Disk stats (read/write): 00:41:44.773 nvme0n1: ios=862/1024, merge=0/0, ticks=1595/192, in_queue=1787, util=98.60% 00:41:44.773 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:44.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:44.773 00:15:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:44.773 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:41:44.773 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:41:44.773 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:44.773 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:41:44.773 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:44.773 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:41:44.773 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:44.773 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:44.773 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:44.774 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:41:44.774 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:44.774 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:41:44.774 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:44.774 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:44.774 rmmod nvme_tcp 00:41:44.774 rmmod nvme_fabrics 00:41:44.774 rmmod nvme_keyring 00:41:45.032 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:45.032 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:41:45.032 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:41:45.032 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3686815 ']' 00:41:45.032 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3686815 00:41:45.032 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3686815 ']' 00:41:45.032 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3686815 00:41:45.032 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:41:45.032 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:45.032 00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3686815 00:41:45.032 00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:45.032 00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:45.032 00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 3686815' 00:41:45.032 killing process with pid 3686815 00:41:45.032 00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3686815 00:41:45.032 00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3686815 00:41:46.418 00:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:46.418 00:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:46.418 00:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:46.418 00:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:41:46.418 00:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:41:46.418 00:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:46.418 00:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:41:46.418 00:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:46.418 00:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:46.418 00:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:46.418 00:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:46.418 00:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:48.322 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:48.322 00:41:48.322 real 0m11.244s 00:41:48.322 user 0m19.219s 00:41:48.322 sys 0m3.632s 00:41:48.322 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:48.322 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:48.322 ************************************ 00:41:48.322 END TEST nvmf_nmic 00:41:48.322 ************************************ 00:41:48.322 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:48.322 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:41:48.322 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:48.322 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:48.322 ************************************ 00:41:48.322 START TEST nvmf_fio_target 00:41:48.322 ************************************ 00:41:48.322 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:48.322 * Looking for test storage... 
00:41:48.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:48.322 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:48.322 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:41:48.322 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:48.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:48.588 --rc genhtml_branch_coverage=1 00:41:48.588 --rc genhtml_function_coverage=1 00:41:48.588 --rc genhtml_legend=1 00:41:48.588 --rc geninfo_all_blocks=1 00:41:48.588 --rc geninfo_unexecuted_blocks=1 00:41:48.588 00:41:48.588 ' 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:48.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:48.588 --rc genhtml_branch_coverage=1 00:41:48.588 --rc genhtml_function_coverage=1 00:41:48.588 --rc genhtml_legend=1 00:41:48.588 --rc geninfo_all_blocks=1 00:41:48.588 --rc geninfo_unexecuted_blocks=1 00:41:48.588 00:41:48.588 ' 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:48.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:48.588 --rc genhtml_branch_coverage=1 00:41:48.588 --rc genhtml_function_coverage=1 00:41:48.588 --rc genhtml_legend=1 00:41:48.588 --rc geninfo_all_blocks=1 00:41:48.588 --rc geninfo_unexecuted_blocks=1 00:41:48.588 00:41:48.588 ' 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:48.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:48.588 --rc genhtml_branch_coverage=1 00:41:48.588 --rc genhtml_function_coverage=1 00:41:48.588 --rc genhtml_legend=1 00:41:48.588 --rc geninfo_all_blocks=1 00:41:48.588 --rc geninfo_unexecuted_blocks=1 00:41:48.588 
00:41:48.588 ' 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:48.588 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:41:48.589 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:50.491 00:15:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:50.491 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:50.492 00:15:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:50.492 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:50.492 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:50.492 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:50.492 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:50.492 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:50.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:50.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:41:50.751 00:41:50.751 --- 10.0.0.2 ping statistics --- 00:41:50.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:50.751 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:50.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:50.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:41:50.751 00:41:50.751 --- 10.0.0.1 ping statistics --- 00:41:50.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:50.751 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3690058 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3690058 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3690058 ']' 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:50.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
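For readability, the nvmf_tcp_init sequence traced above reduces to the commands below. This is a condensed, hand-written sketch using the interface names and addresses from this particular run (cvl_0_0, cvl_0_1, 10.0.0.1/10.0.0.2), not the literal contents of nvmf/common.sh:

  # move the target-side port into its own network namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # the initiator keeps cvl_0_1 in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP port 4420 for NVMe/TCP traffic arriving on cvl_0_1
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check connectivity in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1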
00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:50.751 00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:50.751 [2024-11-10 00:15:16.867193] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:50.751 [2024-11-10 00:15:16.869958] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:41:50.751 [2024-11-10 00:15:16.870078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:51.010 [2024-11-10 00:15:17.017955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:51.010 [2024-11-10 00:15:17.152056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:51.010 [2024-11-10 00:15:17.152132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:51.011 [2024-11-10 00:15:17.152161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:51.011 [2024-11-10 00:15:17.152183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:51.011 [2024-11-10 00:15:17.152206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:51.011 [2024-11-10 00:15:17.155094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:51.011 [2024-11-10 00:15:17.155165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:51.011 [2024-11-10 00:15:17.155254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:51.011 [2024-11-10 00:15:17.155264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:51.578 [2024-11-10 00:15:17.530764] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:51.578 [2024-11-10 00:15:17.540935] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:51.578 [2024-11-10 00:15:17.541096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:51.578 [2024-11-10 00:15:17.541938] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:51.578 [2024-11-10 00:15:17.542290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
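Once waitforlisten returns, the remainder of the trace provisions the target over the RPC socket and then drives fio from the initiator. Condensed for readability (the long absolute path to scripts/rpc.py is shortened to rpc.py, and the nvme connect host NQN/ID options are omitted), the sequence performed by target/fio.sh is roughly:

  # transport and backing bdevs (seven "bdev_malloc_create 64 512" calls in total: Malloc0..Malloc6)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  # one subsystem with four namespaces and a TCP listener inside the target namespace
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  # initiator side: connect to the subsystem, wait for the 4 namespaces, run the fio workloads
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v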
00:41:51.837 00:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:51.837 00:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:41:51.837 00:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:51.837 00:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:51.837 00:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:51.837 00:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:51.837 00:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:52.096 [2024-11-10 00:15:18.192414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:52.096 00:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:52.662 00:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:52.662 00:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:52.926 00:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:52.926 00:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:53.185 00:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:53.185 00:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:53.442 00:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:53.442 00:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:53.705 00:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:54.271 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:54.271 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:54.529 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:54.529 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:54.787 00:15:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:41:54.787 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:55.353 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:55.612 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:55.612 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:55.869 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:55.869 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:56.127 00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:56.386 [2024-11-10 00:15:22.392509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:56.386 00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:56.643 00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:56.902 00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:57.159 00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:57.159 00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:41:57.159 00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:41:57.159 00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:41:57.159 00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:41:57.159 00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:41:59.089 00:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:41:59.089 00:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:41:59.089 00:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:41:59.089 00:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:41:59.089 00:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:41:59.089 00:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:41:59.089 00:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:59.089 [global] 00:41:59.089 thread=1 00:41:59.089 invalidate=1 00:41:59.089 rw=write 00:41:59.089 time_based=1 00:41:59.089 runtime=1 00:41:59.089 ioengine=libaio 00:41:59.089 direct=1 00:41:59.089 bs=4096 00:41:59.089 iodepth=1 00:41:59.089 norandommap=0 00:41:59.089 numjobs=1 00:41:59.089 00:41:59.089 verify_dump=1 00:41:59.089 verify_backlog=512 00:41:59.089 verify_state_save=0 00:41:59.089 do_verify=1 00:41:59.089 verify=crc32c-intel 00:41:59.089 [job0] 00:41:59.089 filename=/dev/nvme0n1 00:41:59.089 [job1] 00:41:59.089 filename=/dev/nvme0n2 00:41:59.089 [job2] 00:41:59.089 filename=/dev/nvme0n3 00:41:59.089 [job3] 00:41:59.089 filename=/dev/nvme0n4 00:41:59.089 Could not set queue depth (nvme0n1) 00:41:59.089 Could not set queue depth (nvme0n2) 00:41:59.089 Could not set queue depth (nvme0n3) 00:41:59.089 Could not set queue depth (nvme0n4) 00:41:59.378 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:59.378 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:59.378 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:59.378 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:59.378 fio-3.35 00:41:59.378 Starting 4 threads 00:42:00.753 00:42:00.753 job0: (groupid=0, jobs=1): err= 0: pid=3691246: Sun Nov 10 00:15:26 2024 00:42:00.753 read: IOPS=21, BW=84.9KiB/s (86.9kB/s)(88.0KiB/1037msec) 00:42:00.753 slat (nsec): min=7356, max=15667, avg=13343.73, stdev=1489.12 00:42:00.753 clat (usec): min=40648, max=41112, avg=40968.02, stdev=83.19 00:42:00.753 lat (usec): min=40655, max=41125, avg=40981.36, stdev=84.27 00:42:00.753 clat percentiles (usec): 00:42:00.753 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:00.753 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:00.753 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:00.753 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:00.753 | 99.99th=[41157] 00:42:00.753 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:42:00.753 slat (nsec): min=6334, max=27925, avg=7973.86, stdev=2625.15 00:42:00.753 clat (usec): min=186, max=2962, avg=249.26, stdev=121.96 00:42:00.753 lat (usec): min=193, max=2969, avg=257.23, stdev=122.05 00:42:00.753 clat percentiles (usec): 00:42:00.753 | 1.00th=[ 196], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 235], 00:42:00.753 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:42:00.753 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:42:00.753 
| 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 2966], 99.95th=[ 2966], 00:42:00.753 | 99.99th=[ 2966] 00:42:00.753 bw ( KiB/s): min= 4096, max= 4096, per=24.75%, avg=4096.00, stdev= 0.00, samples=1 00:42:00.753 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:00.753 lat (usec) : 250=77.34%, 500=18.35% 00:42:00.753 lat (msec) : 4=0.19%, 50=4.12% 00:42:00.753 cpu : usr=0.19%, sys=0.39%, ctx=537, majf=0, minf=1 00:42:00.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:00.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.753 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:00.753 job1: (groupid=0, jobs=1): err= 0: pid=3691247: Sun Nov 10 00:15:26 2024 00:42:00.753 read: IOPS=1453, BW=5814KiB/s (5954kB/s)(5820KiB/1001msec) 00:42:00.753 slat (nsec): min=4833, max=47149, avg=10964.92, stdev=4726.29 00:42:00.753 clat (usec): min=262, max=41013, avg=406.20, stdev=1539.03 00:42:00.753 lat (usec): min=267, max=41026, avg=417.16, stdev=1539.05 00:42:00.753 clat percentiles (usec): 00:42:00.753 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 289], 00:42:00.754 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 379], 00:42:00.754 | 70.00th=[ 388], 80.00th=[ 392], 90.00th=[ 396], 95.00th=[ 400], 00:42:00.754 | 99.00th=[ 433], 99.50th=[ 453], 99.90th=[40633], 99.95th=[41157], 00:42:00.754 | 99.99th=[41157] 00:42:00.754 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:42:00.754 slat (nsec): min=6448, max=29939, avg=8004.91, stdev=2078.32 00:42:00.754 clat (usec): min=184, max=568, avg=241.13, stdev=59.47 00:42:00.754 lat (usec): min=191, max=576, avg=249.13, stdev=59.57 00:42:00.754 clat percentiles (usec): 00:42:00.754 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:42:00.754 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:42:00.754 | 70.00th=[ 239], 80.00th=[ 289], 90.00th=[ 363], 95.00th=[ 371], 00:42:00.754 | 99.00th=[ 383], 99.50th=[ 388], 99.90th=[ 478], 99.95th=[ 570], 00:42:00.754 | 99.99th=[ 570] 00:42:00.754 bw ( KiB/s): min= 6328, max= 6328, per=38.24%, avg=6328.00, stdev= 0.00, samples=1 00:42:00.754 iops : min= 1582, max= 1582, avg=1582.00, stdev= 0.00, samples=1 00:42:00.754 lat (usec) : 250=38.85%, 500=60.92%, 750=0.10% 00:42:00.754 lat (msec) : 4=0.03%, 20=0.03%, 50=0.07% 00:42:00.754 cpu : usr=1.50%, sys=2.90%, ctx=2993, majf=0, minf=1 00:42:00.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:00.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.754 issued rwts: total=1455,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:00.754 job2: (groupid=0, jobs=1): err= 0: pid=3691248: Sun Nov 10 00:15:26 2024 00:42:00.754 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:42:00.754 slat (nsec): min=4571, max=35455, avg=10783.29, stdev=4320.72 00:42:00.754 clat (usec): min=244, max=678, avg=349.58, stdev=57.39 00:42:00.754 lat (usec): min=251, max=692, avg=360.37, stdev=59.21 00:42:00.754 clat percentiles (usec): 00:42:00.754 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:42:00.754 | 30.00th=[ 306], 
40.00th=[ 326], 50.00th=[ 359], 60.00th=[ 383], 00:42:00.754 | 70.00th=[ 388], 80.00th=[ 392], 90.00th=[ 412], 95.00th=[ 449], 00:42:00.754 | 99.00th=[ 474], 99.50th=[ 498], 99.90th=[ 594], 99.95th=[ 676], 00:42:00.754 | 99.99th=[ 676] 00:42:00.754 write: IOPS=1728, BW=6913KiB/s (7079kB/s)(6920KiB/1001msec); 0 zone resets 00:42:00.754 slat (nsec): min=5995, max=45119, avg=7877.55, stdev=3096.69 00:42:00.754 clat (usec): min=191, max=520, avg=245.21, stdev=59.21 00:42:00.754 lat (usec): min=198, max=527, avg=253.09, stdev=59.73 00:42:00.754 clat percentiles (usec): 00:42:00.754 | 1.00th=[ 196], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 204], 00:42:00.754 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 231], 00:42:00.754 | 70.00th=[ 243], 80.00th=[ 258], 90.00th=[ 367], 95.00th=[ 375], 00:42:00.754 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 465], 99.95th=[ 519], 00:42:00.754 | 99.99th=[ 519] 00:42:00.754 bw ( KiB/s): min= 8192, max= 8192, per=49.51%, avg=8192.00, stdev= 0.00, samples=1 00:42:00.754 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:42:00.754 lat (usec) : 250=40.51%, 500=59.25%, 750=0.24% 00:42:00.754 cpu : usr=1.40%, sys=3.30%, ctx=3266, majf=0, minf=2 00:42:00.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:00.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.754 issued rwts: total=1536,1730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:00.754 job3: (groupid=0, jobs=1): err= 0: pid=3691249: Sun Nov 10 00:15:26 2024 00:42:00.754 read: IOPS=28, BW=112KiB/s (115kB/s)(116KiB/1035msec) 00:42:00.754 slat (nsec): min=5890, max=17046, avg=12784.59, stdev=3309.92 00:42:00.754 clat (usec): min=306, max=42048, avg=30414.31, stdev=18442.23 00:42:00.754 lat (usec): min=314, max=42063, avg=30427.10, stdev=18444.91 00:42:00.754 clat percentiles (usec): 00:42:00.754 | 1.00th=[ 306], 5.00th=[ 433], 10.00th=[ 449], 20.00th=[ 570], 00:42:00.754 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:42:00.754 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:00.754 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:00.754 | 99.99th=[42206] 00:42:00.754 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:42:00.754 slat (nsec): min=6716, max=32210, avg=8694.77, stdev=3028.15 00:42:00.754 clat (usec): min=206, max=582, avg=281.77, stdev=51.98 00:42:00.754 lat (usec): min=214, max=590, avg=290.47, stdev=52.10 00:42:00.754 clat percentiles (usec): 00:42:00.754 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 245], 00:42:00.754 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:42:00.754 | 70.00th=[ 289], 80.00th=[ 314], 90.00th=[ 363], 95.00th=[ 392], 00:42:00.754 | 99.00th=[ 429], 99.50th=[ 461], 99.90th=[ 586], 99.95th=[ 586], 00:42:00.754 | 99.99th=[ 586] 00:42:00.754 bw ( KiB/s): min= 4096, max= 4096, per=24.75%, avg=4096.00, stdev= 0.00, samples=1 00:42:00.754 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:00.754 lat (usec) : 250=25.32%, 500=69.87%, 750=0.74% 00:42:00.754 lat (msec) : 10=0.18%, 50=3.88% 00:42:00.754 cpu : usr=0.10%, sys=0.48%, ctx=542, majf=0, minf=1 00:42:00.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:00.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.754 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:00.754 00:42:00.754 Run status group 0 (all jobs): 00:42:00.754 READ: bw=11.5MiB/s (12.0MB/s), 84.9KiB/s-6138KiB/s (86.9kB/s-6285kB/s), io=11.9MiB (12.5MB), run=1001-1037msec 00:42:00.754 WRITE: bw=16.2MiB/s (16.9MB/s), 1975KiB/s-6913KiB/s (2022kB/s-7079kB/s), io=16.8MiB (17.6MB), run=1001-1037msec 00:42:00.754 00:42:00.754 Disk stats (read/write): 00:42:00.754 nvme0n1: ios=39/512, merge=0/0, ticks=1534/127, in_queue=1661, util=85.17% 00:42:00.754 nvme0n2: ios=1053/1536, merge=0/0, ticks=1329/363, in_queue=1692, util=89.11% 00:42:00.754 nvme0n3: ios=1296/1536, merge=0/0, ticks=506/370, in_queue=876, util=95.08% 00:42:00.754 nvme0n4: ios=51/512, merge=0/0, ticks=1582/144, in_queue=1726, util=94.09% 00:42:00.754 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:42:00.754 [global] 00:42:00.754 thread=1 00:42:00.754 invalidate=1 00:42:00.754 rw=randwrite 00:42:00.754 time_based=1 00:42:00.754 runtime=1 00:42:00.754 ioengine=libaio 00:42:00.754 direct=1 00:42:00.754 bs=4096 00:42:00.754 iodepth=1 00:42:00.754 norandommap=0 00:42:00.754 numjobs=1 00:42:00.754 00:42:00.754 verify_dump=1 00:42:00.754 verify_backlog=512 00:42:00.754 verify_state_save=0 00:42:00.754 do_verify=1 00:42:00.754 verify=crc32c-intel 00:42:00.754 [job0] 00:42:00.754 filename=/dev/nvme0n1 00:42:00.754 [job1] 00:42:00.754 filename=/dev/nvme0n2 00:42:00.754 [job2] 00:42:00.754 filename=/dev/nvme0n3 00:42:00.754 [job3] 00:42:00.754 filename=/dev/nvme0n4 00:42:00.754 Could not set queue depth (nvme0n1) 00:42:00.754 Could not set queue depth (nvme0n2) 00:42:00.754 Could not set queue depth (nvme0n3) 00:42:00.754 Could not set queue depth (nvme0n4) 00:42:00.754 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:00.754 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:00.754 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:00.754 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:00.754 fio-3.35 00:42:00.754 Starting 4 threads 00:42:02.128 00:42:02.128 job0: (groupid=0, jobs=1): err= 0: pid=3691482: Sun Nov 10 00:15:28 2024 00:42:02.128 read: IOPS=19, BW=79.4KiB/s (81.3kB/s)(80.0KiB/1008msec) 00:42:02.128 slat (nsec): min=12450, max=36308, avg=19621.85, stdev=9717.51 00:42:02.128 clat (usec): min=40844, max=42027, avg=41647.84, stdev=473.01 00:42:02.128 lat (usec): min=40880, max=42064, avg=41667.46, stdev=471.00 00:42:02.128 clat percentiles (usec): 00:42:02.128 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:02.128 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:42:02.128 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:02.128 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:02.128 | 99.99th=[42206] 00:42:02.128 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:42:02.128 slat (nsec): min=7444, max=63083, avg=15214.12, 
stdev=8715.88 00:42:02.128 clat (usec): min=216, max=751, avg=321.08, stdev=57.45 00:42:02.128 lat (usec): min=232, max=760, avg=336.29, stdev=57.86 00:42:02.128 clat percentiles (usec): 00:42:02.128 | 1.00th=[ 235], 5.00th=[ 251], 10.00th=[ 265], 20.00th=[ 285], 00:42:02.128 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:42:02.128 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 400], 95.00th=[ 429], 00:42:02.128 | 99.00th=[ 494], 99.50th=[ 529], 99.90th=[ 750], 99.95th=[ 750], 00:42:02.128 | 99.99th=[ 750] 00:42:02.128 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:42:02.128 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:02.128 lat (usec) : 250=4.70%, 500=90.60%, 750=0.75%, 1000=0.19% 00:42:02.128 lat (msec) : 50=3.76% 00:42:02.128 cpu : usr=0.70%, sys=0.89%, ctx=533, majf=0, minf=1 00:42:02.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.128 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:02.128 job1: (groupid=0, jobs=1): err= 0: pid=3691483: Sun Nov 10 00:15:28 2024 00:42:02.128 read: IOPS=126, BW=507KiB/s (519kB/s)(524KiB/1033msec) 00:42:02.128 slat (nsec): min=4500, max=32637, avg=13321.02, stdev=7216.87 00:42:02.128 clat (usec): min=285, max=41998, avg=6919.82, stdev=15079.84 00:42:02.128 lat (usec): min=290, max=42011, avg=6933.14, stdev=15082.77 00:42:02.128 clat percentiles (usec): 00:42:02.128 | 1.00th=[ 289], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 334], 00:42:02.128 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 375], 00:42:02.128 | 70.00th=[ 383], 80.00th=[ 396], 90.00th=[41157], 95.00th=[42206], 00:42:02.128 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:02.128 | 99.99th=[42206] 00:42:02.128 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:42:02.128 slat (nsec): min=5609, max=36454, avg=9794.29, stdev=4852.87 00:42:02.128 clat (usec): min=192, max=920, avg=229.94, stdev=38.44 00:42:02.128 lat (usec): min=199, max=927, avg=239.73, stdev=39.47 00:42:02.128 clat percentiles (usec): 00:42:02.128 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 215], 00:42:02.128 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:42:02.128 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 265], 00:42:02.128 | 99.00th=[ 322], 99.50th=[ 363], 99.90th=[ 922], 99.95th=[ 922], 00:42:02.128 | 99.99th=[ 922] 00:42:02.128 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:42:02.128 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:02.128 lat (usec) : 250=72.63%, 500=23.95%, 1000=0.16% 00:42:02.128 lat (msec) : 50=3.27% 00:42:02.128 cpu : usr=0.10%, sys=0.87%, ctx=643, majf=0, minf=2 00:42:02.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.128 issued rwts: total=131,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:02.128 job2: (groupid=0, jobs=1): err= 0: pid=3691485: Sun Nov 10 00:15:28 2024 
00:42:02.128 read: IOPS=1524, BW=6098KiB/s (6244kB/s)(6104KiB/1001msec) 00:42:02.128 slat (nsec): min=5902, max=71923, avg=17188.03, stdev=8769.45 00:42:02.128 clat (usec): min=262, max=657, avg=373.04, stdev=82.12 00:42:02.128 lat (usec): min=269, max=664, avg=390.23, stdev=81.53 00:42:02.128 clat percentiles (usec): 00:42:02.129 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 306], 00:42:02.129 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 351], 60.00th=[ 367], 00:42:02.129 | 70.00th=[ 392], 80.00th=[ 445], 90.00th=[ 502], 95.00th=[ 545], 00:42:02.129 | 99.00th=[ 611], 99.50th=[ 619], 99.90th=[ 644], 99.95th=[ 660], 00:42:02.129 | 99.99th=[ 660] 00:42:02.129 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:42:02.129 slat (nsec): min=6827, max=48447, avg=16146.23, stdev=6023.40 00:42:02.129 clat (usec): min=196, max=394, avg=237.65, stdev=25.74 00:42:02.129 lat (usec): min=204, max=403, avg=253.80, stdev=25.85 00:42:02.129 clat percentiles (usec): 00:42:02.129 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 215], 00:42:02.129 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:42:02.129 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 285], 00:42:02.129 | 99.00th=[ 306], 99.50th=[ 338], 99.90th=[ 379], 99.95th=[ 396], 00:42:02.129 | 99.99th=[ 396] 00:42:02.129 bw ( KiB/s): min= 7672, max= 7672, per=48.37%, avg=7672.00, stdev= 0.00, samples=1 00:42:02.129 iops : min= 1918, max= 1918, avg=1918.00, stdev= 0.00, samples=1 00:42:02.129 lat (usec) : 250=36.90%, 500=57.74%, 750=5.36% 00:42:02.129 cpu : usr=3.00%, sys=6.30%, ctx=3064, majf=0, minf=1 00:42:02.129 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.129 issued rwts: total=1526,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.129 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:02.129 job3: (groupid=0, jobs=1): err= 0: pid=3691486: Sun Nov 10 00:15:28 2024 00:42:02.129 read: IOPS=1455, BW=5822KiB/s (5962kB/s)(5828KiB/1001msec) 00:42:02.129 slat (nsec): min=5828, max=44321, avg=13246.42, stdev=6143.30 00:42:02.129 clat (usec): min=280, max=680, avg=342.25, stdev=38.04 00:42:02.129 lat (usec): min=286, max=698, avg=355.49, stdev=41.49 00:42:02.129 clat percentiles (usec): 00:42:02.129 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 314], 00:42:02.129 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:42:02.129 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 388], 95.00th=[ 416], 00:42:02.129 | 99.00th=[ 494], 99.50th=[ 506], 99.90th=[ 529], 99.95th=[ 685], 00:42:02.129 | 99.99th=[ 685] 00:42:02.129 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:42:02.129 slat (nsec): min=6864, max=63412, avg=17916.93, stdev=7528.05 00:42:02.129 clat (usec): min=206, max=1864, avg=286.96, stdev=58.51 00:42:02.129 lat (usec): min=221, max=1875, avg=304.88, stdev=58.14 00:42:02.129 clat percentiles (usec): 00:42:02.129 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 255], 00:42:02.129 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:42:02.129 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 383], 00:42:02.129 | 99.00th=[ 449], 99.50th=[ 474], 99.90th=[ 523], 99.95th=[ 1860], 00:42:02.129 | 99.99th=[ 1860] 00:42:02.129 bw ( KiB/s): min= 8136, max= 8136, per=51.30%, 
avg=8136.00, stdev= 0.00, samples=1 00:42:02.129 iops : min= 2034, max= 2034, avg=2034.00, stdev= 0.00, samples=1 00:42:02.129 lat (usec) : 250=8.75%, 500=90.78%, 750=0.43% 00:42:02.129 lat (msec) : 2=0.03% 00:42:02.129 cpu : usr=4.30%, sys=5.60%, ctx=2993, majf=0, minf=1 00:42:02.129 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.129 issued rwts: total=1457,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.129 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:02.129 00:42:02.129 Run status group 0 (all jobs): 00:42:02.129 READ: bw=11.9MiB/s (12.4MB/s), 79.4KiB/s-6098KiB/s (81.3kB/s-6244kB/s), io=12.2MiB (12.8MB), run=1001-1033msec 00:42:02.129 WRITE: bw=15.5MiB/s (16.2MB/s), 1983KiB/s-6138KiB/s (2030kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1033msec 00:42:02.129 00:42:02.129 Disk stats (read/write): 00:42:02.129 nvme0n1: ios=66/512, merge=0/0, ticks=702/151, in_queue=853, util=88.08% 00:42:02.129 nvme0n2: ios=176/512, merge=0/0, ticks=776/113, in_queue=889, util=91.37% 00:42:02.129 nvme0n3: ios=1145/1536, merge=0/0, ticks=941/340, in_queue=1281, util=100.00% 00:42:02.129 nvme0n4: ios=1100/1536, merge=0/0, ticks=448/420, in_queue=868, util=96.12% 00:42:02.129 00:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:42:02.129 [global] 00:42:02.129 thread=1 00:42:02.129 invalidate=1 00:42:02.129 rw=write 00:42:02.129 time_based=1 00:42:02.129 runtime=1 00:42:02.129 ioengine=libaio 00:42:02.129 direct=1 00:42:02.129 bs=4096 00:42:02.129 iodepth=128 00:42:02.129 norandommap=0 00:42:02.129 numjobs=1 00:42:02.129 00:42:02.129 verify_dump=1 00:42:02.129 verify_backlog=512 00:42:02.129 verify_state_save=0 00:42:02.129 do_verify=1 00:42:02.129 verify=crc32c-intel 00:42:02.129 [job0] 00:42:02.129 filename=/dev/nvme0n1 00:42:02.129 [job1] 00:42:02.129 filename=/dev/nvme0n2 00:42:02.129 [job2] 00:42:02.129 filename=/dev/nvme0n3 00:42:02.129 [job3] 00:42:02.129 filename=/dev/nvme0n4 00:42:02.129 Could not set queue depth (nvme0n1) 00:42:02.129 Could not set queue depth (nvme0n2) 00:42:02.129 Could not set queue depth (nvme0n3) 00:42:02.129 Could not set queue depth (nvme0n4) 00:42:02.387 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:02.387 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:02.387 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:02.387 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:02.387 fio-3.35 00:42:02.387 Starting 4 threads 00:42:03.761 00:42:03.761 job0: (groupid=0, jobs=1): err= 0: pid=3691758: Sun Nov 10 00:15:29 2024 00:42:03.761 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1008msec) 00:42:03.761 slat (usec): min=2, max=16402, avg=116.01, stdev=1017.74 00:42:03.761 clat (usec): min=4313, max=35245, avg=14303.36, stdev=3495.85 00:42:03.761 lat (usec): min=4321, max=35261, avg=14419.37, stdev=3605.31 00:42:03.761 clat percentiles (usec): 00:42:03.761 | 1.00th=[ 8094], 5.00th=[ 9241], 10.00th=[10814], 20.00th=[12125], 00:42:03.761 | 30.00th=[13042], 
40.00th=[13435], 50.00th=[13566], 60.00th=[13829], 00:42:03.761 | 70.00th=[14877], 80.00th=[16057], 90.00th=[18744], 95.00th=[21627], 00:42:03.761 | 99.00th=[26084], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:42:03.761 | 99.99th=[35390] 00:42:03.761 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:42:03.761 slat (usec): min=3, max=40975, avg=107.15, stdev=1009.49 00:42:03.761 clat (usec): min=1679, max=42657, avg=13402.53, stdev=3547.45 00:42:03.761 lat (usec): min=1694, max=58856, avg=13509.69, stdev=3662.61 00:42:03.761 clat percentiles (usec): 00:42:03.761 | 1.00th=[ 4359], 5.00th=[ 7177], 10.00th=[ 8586], 20.00th=[11338], 00:42:03.761 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13304], 60.00th=[14091], 00:42:03.761 | 70.00th=[14615], 80.00th=[15139], 90.00th=[18482], 95.00th=[19268], 00:42:03.761 | 99.00th=[22414], 99.50th=[22676], 99.90th=[26608], 99.95th=[28705], 00:42:03.761 | 99.99th=[42730] 00:42:03.761 bw ( KiB/s): min=16584, max=19296, per=35.34%, avg=17940.00, stdev=1917.67, samples=2 00:42:03.761 iops : min= 4146, max= 4824, avg=4485.00, stdev=479.42, samples=2 00:42:03.761 lat (msec) : 2=0.06%, 4=0.23%, 10=12.01%, 20=82.54%, 50=5.16% 00:42:03.761 cpu : usr=3.28%, sys=5.66%, ctx=269, majf=0, minf=1 00:42:03.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:42:03.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:03.761 issued rwts: total=4100,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:03.761 job1: (groupid=0, jobs=1): err= 0: pid=3691777: Sun Nov 10 00:15:29 2024 00:42:03.761 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:42:03.761 slat (usec): min=3, max=17757, avg=192.95, stdev=1239.57 00:42:03.761 clat (usec): min=13011, max=66475, avg=25515.78, stdev=13008.55 00:42:03.761 lat (usec): min=13017, max=66482, avg=25708.73, stdev=13102.64 00:42:03.761 clat percentiles (usec): 00:42:03.761 | 1.00th=[13435], 5.00th=[14877], 10.00th=[15664], 20.00th=[16319], 00:42:03.761 | 30.00th=[16450], 40.00th=[16712], 50.00th=[17957], 60.00th=[20317], 00:42:03.761 | 70.00th=[28705], 80.00th=[39584], 90.00th=[48497], 95.00th=[52691], 00:42:03.761 | 99.00th=[55313], 99.50th=[58459], 99.90th=[66323], 99.95th=[66323], 00:42:03.761 | 99.99th=[66323] 00:42:03.761 write: IOPS=2729, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1004msec); 0 zone resets 00:42:03.761 slat (usec): min=4, max=17800, avg=178.45, stdev=1081.80 00:42:03.761 clat (usec): min=2431, max=70720, avg=21708.11, stdev=11646.46 00:42:03.761 lat (usec): min=9868, max=70733, avg=21886.56, stdev=11747.94 00:42:03.761 clat percentiles (usec): 00:42:03.761 | 1.00th=[10159], 5.00th=[13698], 10.00th=[15270], 20.00th=[15926], 00:42:03.761 | 30.00th=[16319], 40.00th=[16450], 50.00th=[16712], 60.00th=[17433], 00:42:03.761 | 70.00th=[20055], 80.00th=[22414], 90.00th=[38536], 95.00th=[51643], 00:42:03.761 | 99.00th=[66323], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:42:03.761 | 99.99th=[70779] 00:42:03.761 bw ( KiB/s): min= 8192, max=12704, per=20.58%, avg=10448.00, stdev=3190.47, samples=2 00:42:03.761 iops : min= 2048, max= 3176, avg=2612.00, stdev=797.62, samples=2 00:42:03.761 lat (msec) : 4=0.02%, 10=0.23%, 20=63.66%, 50=28.47%, 100=7.62% 00:42:03.761 cpu : usr=1.89%, sys=3.89%, ctx=199, majf=0, minf=1 00:42:03.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 
32=0.6%, >=64=98.8% 00:42:03.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:03.761 issued rwts: total=2560,2740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:03.761 job2: (groupid=0, jobs=1): err= 0: pid=3691806: Sun Nov 10 00:15:29 2024 00:42:03.761 read: IOPS=1513, BW=6053KiB/s (6198kB/s)(6144KiB/1015msec) 00:42:03.761 slat (usec): min=3, max=20804, avg=224.25, stdev=1376.70 00:42:03.761 clat (usec): min=7508, max=89178, avg=25307.58, stdev=15607.69 00:42:03.761 lat (usec): min=7516, max=89185, avg=25531.83, stdev=15741.90 00:42:03.761 clat percentiles (usec): 00:42:03.761 | 1.00th=[12649], 5.00th=[13435], 10.00th=[14222], 20.00th=[16319], 00:42:03.761 | 30.00th=[17433], 40.00th=[18744], 50.00th=[19530], 60.00th=[21365], 00:42:03.761 | 70.00th=[24511], 80.00th=[28967], 90.00th=[44827], 95.00th=[66323], 00:42:03.761 | 99.00th=[82314], 99.50th=[87557], 99.90th=[89654], 99.95th=[89654], 00:42:03.761 | 99.99th=[89654] 00:42:03.761 write: IOPS=1920, BW=7681KiB/s (7865kB/s)(7796KiB/1015msec); 0 zone resets 00:42:03.761 slat (usec): min=3, max=31575, avg=330.54, stdev=1678.48 00:42:03.761 clat (usec): min=5599, max=89182, avg=43322.75, stdev=20595.51 00:42:03.761 lat (usec): min=5606, max=89192, avg=43653.29, stdev=20708.98 00:42:03.762 clat percentiles (usec): 00:42:03.762 | 1.00th=[ 7963], 5.00th=[14353], 10.00th=[15139], 20.00th=[20579], 00:42:03.762 | 30.00th=[28967], 40.00th=[31065], 50.00th=[40633], 60.00th=[53740], 00:42:03.762 | 70.00th=[63177], 80.00th=[65274], 90.00th=[67634], 95.00th=[70779], 00:42:03.762 | 99.00th=[77071], 99.50th=[79168], 99.90th=[89654], 99.95th=[89654], 00:42:03.762 | 99.99th=[89654] 00:42:03.762 bw ( KiB/s): min= 6376, max= 8192, per=14.35%, avg=7284.00, stdev=1284.11, samples=2 00:42:03.762 iops : min= 1594, max= 2048, avg=1821.00, stdev=321.03, samples=2 00:42:03.762 lat (msec) : 10=1.15%, 20=34.98%, 50=34.38%, 100=29.50% 00:42:03.762 cpu : usr=1.48%, sys=2.37%, ctx=212, majf=0, minf=1 00:42:03.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:42:03.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:03.762 issued rwts: total=1536,1949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:03.762 job3: (groupid=0, jobs=1): err= 0: pid=3691816: Sun Nov 10 00:15:29 2024 00:42:03.762 read: IOPS=3104, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1002msec) 00:42:03.762 slat (usec): min=2, max=27658, avg=161.02, stdev=1111.42 00:42:03.762 clat (usec): min=750, max=87075, avg=19203.68, stdev=10765.88 00:42:03.762 lat (usec): min=5579, max=87086, avg=19364.70, stdev=10845.15 00:42:03.762 clat percentiles (usec): 00:42:03.762 | 1.00th=[ 5735], 5.00th=[12649], 10.00th=[13829], 20.00th=[15401], 00:42:03.762 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16450], 60.00th=[16581], 00:42:03.762 | 70.00th=[16909], 80.00th=[17171], 90.00th=[29754], 95.00th=[40109], 00:42:03.762 | 99.00th=[68682], 99.50th=[77071], 99.90th=[85459], 99.95th=[85459], 00:42:03.762 | 99.99th=[87557] 00:42:03.762 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:42:03.762 slat (usec): min=3, max=12932, avg=135.52, stdev=808.88 00:42:03.762 clat (usec): min=5789, max=79081, avg=18674.70, stdev=7798.07 
00:42:03.762 lat (usec): min=5795, max=79090, avg=18810.22, stdev=7857.64 00:42:03.762 clat percentiles (usec): 00:42:03.762 | 1.00th=[10552], 5.00th=[11863], 10.00th=[14615], 20.00th=[14877], 00:42:03.762 | 30.00th=[15139], 40.00th=[15795], 50.00th=[16188], 60.00th=[16581], 00:42:03.762 | 70.00th=[17171], 80.00th=[22676], 90.00th=[25297], 95.00th=[30278], 00:42:03.762 | 99.00th=[62653], 99.50th=[63177], 99.90th=[68682], 99.95th=[70779], 00:42:03.762 | 99.99th=[79168] 00:42:03.762 bw ( KiB/s): min=12288, max=15680, per=27.55%, avg=13984.00, stdev=2398.51, samples=2 00:42:03.762 iops : min= 3072, max= 3920, avg=3496.00, stdev=599.63, samples=2 00:42:03.762 lat (usec) : 1000=0.01% 00:42:03.762 lat (msec) : 10=0.66%, 20=80.39%, 50=16.40%, 100=2.54% 00:42:03.762 cpu : usr=1.30%, sys=3.00%, ctx=273, majf=0, minf=2 00:42:03.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:42:03.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:03.762 issued rwts: total=3111,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:03.762 00:42:03.762 Run status group 0 (all jobs): 00:42:03.762 READ: bw=43.5MiB/s (45.6MB/s), 6053KiB/s-15.9MiB/s (6198kB/s-16.7MB/s), io=44.2MiB (46.3MB), run=1002-1015msec 00:42:03.762 WRITE: bw=49.6MiB/s (52.0MB/s), 7681KiB/s-17.9MiB/s (7865kB/s-18.7MB/s), io=50.3MiB (52.8MB), run=1002-1015msec 00:42:03.762 00:42:03.762 Disk stats (read/write): 00:42:03.762 nvme0n1: ios=3636/3655, merge=0/0, ticks=49919/47186, in_queue=97105, util=91.78% 00:42:03.762 nvme0n2: ios=2089/2049, merge=0/0, ticks=27687/23328, in_queue=51015, util=95.83% 00:42:03.762 nvme0n3: ios=1527/1536, merge=0/0, ticks=38270/56227, in_queue=94497, util=100.00% 00:42:03.762 nvme0n4: ios=2560/2835, merge=0/0, ticks=15682/14655, in_queue=30337, util=89.65% 00:42:03.762 00:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:42:03.762 [global] 00:42:03.762 thread=1 00:42:03.762 invalidate=1 00:42:03.762 rw=randwrite 00:42:03.762 time_based=1 00:42:03.762 runtime=1 00:42:03.762 ioengine=libaio 00:42:03.762 direct=1 00:42:03.762 bs=4096 00:42:03.762 iodepth=128 00:42:03.762 norandommap=0 00:42:03.762 numjobs=1 00:42:03.762 00:42:03.762 verify_dump=1 00:42:03.762 verify_backlog=512 00:42:03.762 verify_state_save=0 00:42:03.762 do_verify=1 00:42:03.762 verify=crc32c-intel 00:42:03.762 [job0] 00:42:03.762 filename=/dev/nvme0n1 00:42:03.762 [job1] 00:42:03.762 filename=/dev/nvme0n2 00:42:03.762 [job2] 00:42:03.762 filename=/dev/nvme0n3 00:42:03.762 [job3] 00:42:03.762 filename=/dev/nvme0n4 00:42:03.762 Could not set queue depth (nvme0n1) 00:42:03.762 Could not set queue depth (nvme0n2) 00:42:03.762 Could not set queue depth (nvme0n3) 00:42:03.762 Could not set queue depth (nvme0n4) 00:42:03.762 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:03.762 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:03.762 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:03.762 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:42:03.762 fio-3.35 00:42:03.762 Starting 4 threads 00:42:05.148 00:42:05.148 job0: (groupid=0, jobs=1): err= 0: pid=3692064: Sun Nov 10 00:15:30 2024 00:42:05.148 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:42:05.148 slat (usec): min=2, max=15576, avg=140.03, stdev=873.62 00:42:05.148 clat (usec): min=8598, max=99908, avg=17806.71, stdev=12089.68 00:42:05.148 lat (usec): min=8601, max=99912, avg=17946.74, stdev=12167.30 00:42:05.148 clat percentiles (msec): 00:42:05.148 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:42:05.148 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:42:05.148 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 21], 95.00th=[ 38], 00:42:05.148 | 99.00th=[ 84], 99.50th=[ 88], 99.90th=[ 101], 99.95th=[ 101], 00:42:05.148 | 99.99th=[ 101] 00:42:05.148 write: IOPS=4081, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:42:05.148 slat (usec): min=3, max=12517, avg=116.52, stdev=692.58 00:42:05.148 clat (usec): min=501, max=29664, avg=15133.01, stdev=2939.22 00:42:05.148 lat (usec): min=5149, max=29668, avg=15249.54, stdev=2983.42 00:42:05.148 clat percentiles (usec): 00:42:05.148 | 1.00th=[ 5407], 5.00th=[11600], 10.00th=[13042], 20.00th=[13566], 00:42:05.148 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14615], 60.00th=[14877], 00:42:05.148 | 70.00th=[15139], 80.00th=[16581], 90.00th=[19530], 95.00th=[20317], 00:42:05.148 | 99.00th=[23725], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754], 00:42:05.148 | 99.99th=[29754] 00:42:05.148 bw ( KiB/s): min=15280, max=16416, per=27.45%, avg=15848.00, stdev=803.27, samples=2 00:42:05.148 iops : min= 3820, max= 4104, avg=3962.00, stdev=200.82, samples=2 00:42:05.148 lat (usec) : 750=0.01% 00:42:05.148 lat (msec) : 10=2.14%, 20=88.79%, 50=7.38%, 100=1.68% 00:42:05.148 cpu : usr=2.80%, sys=4.10%, ctx=379, majf=0, minf=1 00:42:05.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:05.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:05.148 issued rwts: total=3584,4090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:05.148 job1: (groupid=0, jobs=1): err= 0: pid=3692065: Sun Nov 10 00:15:30 2024 00:42:05.148 read: IOPS=3063, BW=12.0MiB/s (12.5MB/s)(12.1MiB/1008msec) 00:42:05.148 slat (usec): min=2, max=41443, avg=155.64, stdev=1226.88 00:42:05.148 clat (usec): min=5383, max=73549, avg=19061.20, stdev=10594.86 00:42:05.148 lat (usec): min=9822, max=78147, avg=19216.83, stdev=10641.55 00:42:05.148 clat percentiles (usec): 00:42:05.148 | 1.00th=[10290], 5.00th=[10945], 10.00th=[11863], 20.00th=[13435], 00:42:05.148 | 30.00th=[14353], 40.00th=[15139], 50.00th=[16188], 60.00th=[17695], 00:42:05.148 | 70.00th=[19268], 80.00th=[20579], 90.00th=[26346], 95.00th=[32900], 00:42:05.148 | 99.00th=[66323], 99.50th=[70779], 99.90th=[73925], 99.95th=[73925], 00:42:05.148 | 99.99th=[73925] 00:42:05.148 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:42:05.148 slat (usec): min=3, max=20422, avg=137.51, stdev=947.44 00:42:05.148 clat (usec): min=7061, max=81688, avg=19068.46, stdev=10234.17 00:42:05.148 lat (usec): min=7070, max=81692, avg=19205.97, stdev=10293.05 00:42:05.148 clat percentiles (usec): 00:42:05.148 | 1.00th=[ 8848], 5.00th=[11994], 10.00th=[12518], 20.00th=[13042], 00:42:05.148 | 30.00th=[14091], 40.00th=[14353], 50.00th=[15533], 
60.00th=[17433], 00:42:05.148 | 70.00th=[19792], 80.00th=[23725], 90.00th=[26870], 95.00th=[31327], 00:42:05.148 | 99.00th=[65799], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:42:05.148 | 99.99th=[81265] 00:42:05.148 bw ( KiB/s): min=11400, max=16408, per=24.08%, avg=13904.00, stdev=3541.19, samples=2 00:42:05.148 iops : min= 2850, max= 4102, avg=3476.00, stdev=885.30, samples=2 00:42:05.148 lat (msec) : 10=1.50%, 20=71.31%, 50=23.53%, 100=3.66% 00:42:05.148 cpu : usr=3.57%, sys=5.56%, ctx=244, majf=0, minf=2 00:42:05.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:42:05.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:05.148 issued rwts: total=3088,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:05.148 job2: (groupid=0, jobs=1): err= 0: pid=3692066: Sun Nov 10 00:15:30 2024 00:42:05.148 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:42:05.148 slat (usec): min=2, max=32342, avg=165.22, stdev=1458.19 00:42:05.148 clat (usec): min=3957, max=84868, avg=20657.60, stdev=11091.06 00:42:05.148 lat (usec): min=3964, max=84877, avg=20822.82, stdev=11163.88 00:42:05.148 clat percentiles (usec): 00:42:05.148 | 1.00th=[ 8291], 5.00th=[12649], 10.00th=[13698], 20.00th=[14484], 00:42:05.148 | 30.00th=[14877], 40.00th=[15401], 50.00th=[15664], 60.00th=[17171], 00:42:05.148 | 70.00th=[19530], 80.00th=[25822], 90.00th=[37487], 95.00th=[45876], 00:42:05.148 | 99.00th=[70779], 99.50th=[77071], 99.90th=[79168], 99.95th=[84411], 00:42:05.148 | 99.99th=[84411] 00:42:05.148 write: IOPS=3326, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1012msec); 0 zone resets 00:42:05.148 slat (usec): min=3, max=25451, avg=141.07, stdev=1152.29 00:42:05.148 clat (usec): min=2371, max=72976, avg=19287.22, stdev=9062.64 00:42:05.148 lat (usec): min=2597, max=72980, avg=19428.29, stdev=9154.94 00:42:05.148 clat percentiles (usec): 00:42:05.148 | 1.00th=[ 5735], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[12780], 00:42:05.148 | 30.00th=[14746], 40.00th=[15664], 50.00th=[16581], 60.00th=[17695], 00:42:05.148 | 70.00th=[22414], 80.00th=[24773], 90.00th=[26870], 95.00th=[42206], 00:42:05.148 | 99.00th=[51643], 99.50th=[54264], 99.90th=[55837], 99.95th=[67634], 00:42:05.148 | 99.99th=[72877] 00:42:05.148 bw ( KiB/s): min=10800, max=15104, per=22.43%, avg=12952.00, stdev=3043.39, samples=2 00:42:05.148 iops : min= 2700, max= 3776, avg=3238.00, stdev=760.85, samples=2 00:42:05.148 lat (msec) : 4=0.30%, 10=3.51%, 20=64.12%, 50=29.84%, 100=2.24% 00:42:05.148 cpu : usr=3.26%, sys=3.96%, ctx=232, majf=0, minf=1 00:42:05.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:42:05.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:05.148 issued rwts: total=3072,3366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:05.148 job3: (groupid=0, jobs=1): err= 0: pid=3692067: Sun Nov 10 00:15:30 2024 00:42:05.148 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:42:05.148 slat (usec): min=2, max=21353, avg=153.38, stdev=1283.36 00:42:05.148 clat (usec): min=4506, max=47559, avg=20729.16, stdev=6837.24 00:42:05.148 lat (usec): min=4511, max=47573, avg=20882.55, stdev=6907.98 00:42:05.148 clat percentiles 
(usec): 00:42:05.148 | 1.00th=[ 7963], 5.00th=[12780], 10.00th=[14353], 20.00th=[15795], 00:42:05.148 | 30.00th=[16712], 40.00th=[17695], 50.00th=[17957], 60.00th=[20055], 00:42:05.148 | 70.00th=[24511], 80.00th=[26084], 90.00th=[30278], 95.00th=[32900], 00:42:05.148 | 99.00th=[45351], 99.50th=[45876], 99.90th=[45876], 99.95th=[47449], 00:42:05.148 | 99.99th=[47449] 00:42:05.148 write: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1010msec); 0 zone resets 00:42:05.148 slat (usec): min=3, max=25353, avg=138.37, stdev=1194.16 00:42:05.148 clat (usec): min=3013, max=40292, avg=18046.76, stdev=5214.47 00:42:05.148 lat (usec): min=3020, max=42646, avg=18185.13, stdev=5295.12 00:42:05.148 clat percentiles (usec): 00:42:05.149 | 1.00th=[ 5538], 5.00th=[ 9372], 10.00th=[11994], 20.00th=[14353], 00:42:05.149 | 30.00th=[16188], 40.00th=[16712], 50.00th=[17433], 60.00th=[18220], 00:42:05.149 | 70.00th=[19792], 80.00th=[22938], 90.00th=[25035], 95.00th=[26608], 00:42:05.149 | 99.00th=[31327], 99.50th=[31327], 99.90th=[36439], 99.95th=[39060], 00:42:05.149 | 99.99th=[40109] 00:42:05.149 bw ( KiB/s): min=12464, max=15048, per=23.83%, avg=13756.00, stdev=1827.16, samples=2 00:42:05.149 iops : min= 3116, max= 3762, avg=3439.00, stdev=456.79, samples=2 00:42:05.149 lat (msec) : 4=0.09%, 10=4.49%, 20=60.76%, 50=34.66% 00:42:05.149 cpu : usr=3.87%, sys=5.35%, ctx=179, majf=0, minf=1 00:42:05.149 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:42:05.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:05.149 issued rwts: total=3072,3567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:05.149 00:42:05.149 Run status group 0 (all jobs): 00:42:05.149 READ: bw=49.5MiB/s (51.9MB/s), 11.9MiB/s-14.0MiB/s (12.4MB/s-14.7MB/s), io=50.1MiB (52.5MB), run=1002-1012msec 00:42:05.149 WRITE: bw=56.4MiB/s (59.1MB/s), 13.0MiB/s-15.9MiB/s (13.6MB/s-16.7MB/s), io=57.1MiB (59.8MB), run=1002-1012msec 00:42:05.149 00:42:05.149 Disk stats (read/write): 00:42:05.149 nvme0n1: ios=3368/3584, merge=0/0, ticks=16819/18537, in_queue=35356, util=91.58% 00:42:05.149 nvme0n2: ios=2992/3072, merge=0/0, ticks=25687/22852, in_queue=48539, util=94.31% 00:42:05.149 nvme0n3: ios=2602/2560, merge=0/0, ticks=45221/40038, in_queue=85259, util=97.71% 00:42:05.149 nvme0n4: ios=2623/2991, merge=0/0, ticks=52681/50659, in_queue=103340, util=100.00% 00:42:05.149 00:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:42:05.149 00:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3692202 00:42:05.149 00:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:42:05.149 00:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:42:05.149 [global] 00:42:05.149 thread=1 00:42:05.149 invalidate=1 00:42:05.149 rw=read 00:42:05.149 time_based=1 00:42:05.149 runtime=10 00:42:05.149 ioengine=libaio 00:42:05.149 direct=1 00:42:05.149 bs=4096 00:42:05.149 iodepth=1 00:42:05.149 norandommap=1 00:42:05.149 numjobs=1 00:42:05.149 00:42:05.149 [job0] 00:42:05.149 filename=/dev/nvme0n1 00:42:05.149 [job1] 00:42:05.149 filename=/dev/nvme0n2 00:42:05.149 [job2] 00:42:05.149 filename=/dev/nvme0n3 
00:42:05.149 [job3] 00:42:05.149 filename=/dev/nvme0n4 00:42:05.149 Could not set queue depth (nvme0n1) 00:42:05.149 Could not set queue depth (nvme0n2) 00:42:05.149 Could not set queue depth (nvme0n3) 00:42:05.149 Could not set queue depth (nvme0n4) 00:42:05.149 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:05.149 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:05.149 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:05.149 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:05.149 fio-3.35 00:42:05.149 Starting 4 threads 00:42:08.427 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:42:08.427 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:42:08.427 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=34033664, buflen=4096 00:42:08.427 fio: pid=3692293, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:08.427 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:08.427 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:42:08.685 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=331776, buflen=4096 00:42:08.685 fio: pid=3692292, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:08.943 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=41295872, buflen=4096 00:42:08.943 fio: pid=3692290, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:08.943 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:08.943 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:42:09.201 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14782464, buflen=4096 00:42:09.201 fio: pid=3692291, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:09.201 00:42:09.201 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3692290: Sun Nov 10 00:15:35 2024 00:42:09.201 read: IOPS=2890, BW=11.3MiB/s (11.8MB/s)(39.4MiB/3488msec) 00:42:09.201 slat (usec): min=5, max=11730, avg=10.90, stdev=164.41 00:42:09.201 clat (usec): min=252, max=41274, avg=330.34, stdev=991.37 00:42:09.201 lat (usec): min=258, max=41280, avg=341.24, stdev=1005.08 00:42:09.201 clat percentiles (usec): 00:42:09.201 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:42:09.201 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 00:42:09.201 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 359], 00:42:09.201 | 99.00th=[ 469], 99.50th=[ 519], 99.90th=[ 914], 99.95th=[40633], 
00:42:09.201 | 99.99th=[41157] 00:42:09.201 bw ( KiB/s): min= 9976, max=13112, per=49.90%, avg=11532.00, stdev=1122.53, samples=6 00:42:09.201 iops : min= 2494, max= 3278, avg=2883.00, stdev=280.63, samples=6 00:42:09.201 lat (usec) : 500=99.34%, 750=0.51%, 1000=0.06% 00:42:09.201 lat (msec) : 2=0.02%, 4=0.01%, 50=0.06% 00:42:09.201 cpu : usr=1.58%, sys=4.13%, ctx=10086, majf=0, minf=2 00:42:09.201 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:09.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.201 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.201 issued rwts: total=10083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:09.201 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:09.201 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3692291: Sun Nov 10 00:15:35 2024 00:42:09.201 read: IOPS=944, BW=3777KiB/s (3868kB/s)(14.1MiB/3822msec) 00:42:09.201 slat (usec): min=4, max=12626, avg=23.81, stdev=407.17 00:42:09.201 clat (usec): min=236, max=42102, avg=1030.43, stdev=5487.72 00:42:09.201 lat (usec): min=242, max=42114, avg=1054.25, stdev=5502.43 00:42:09.201 clat percentiles (usec): 00:42:09.201 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 258], 20.00th=[ 262], 00:42:09.201 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:42:09.201 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 363], 95.00th=[ 392], 00:42:09.201 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:09.201 | 99.99th=[42206] 00:42:09.201 bw ( KiB/s): min= 96, max=12227, per=13.14%, avg=3037.00, stdev=5131.33, samples=7 00:42:09.201 iops : min= 24, max= 3056, avg=759.14, stdev=1282.61, samples=7 00:42:09.201 lat (usec) : 250=1.52%, 500=96.07%, 750=0.42%, 1000=0.08% 00:42:09.201 lat (msec) : 2=0.08%, 10=0.03%, 50=1.77% 00:42:09.201 cpu : usr=0.39%, sys=1.13%, ctx=3617, majf=0, minf=1 00:42:09.201 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:09.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.201 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.201 issued rwts: total=3610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:09.201 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:09.201 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3692292: Sun Nov 10 00:15:35 2024 00:42:09.201 read: IOPS=25, BW=100KiB/s (103kB/s)(324KiB/3234msec) 00:42:09.201 slat (nsec): min=7668, max=43926, avg=19894.44, stdev=8919.12 00:42:09.201 clat (usec): min=404, max=42510, avg=39606.45, stdev=7714.26 00:42:09.201 lat (usec): min=411, max=42525, avg=39626.42, stdev=7715.02 00:42:09.201 clat percentiles (usec): 00:42:09.201 | 1.00th=[ 404], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:42:09.201 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:09.201 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:42:09.201 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:09.201 | 99.99th=[42730] 00:42:09.201 bw ( KiB/s): min= 96, max= 112, per=0.43%, avg=100.00, stdev= 6.69, samples=6 00:42:09.201 iops : min= 24, max= 28, avg=25.00, stdev= 1.67, samples=6 00:42:09.201 lat (usec) : 500=1.22%, 750=2.44% 00:42:09.201 lat (msec) : 50=95.12% 00:42:09.201 cpu : usr=0.00%, sys=0.06%, ctx=84, majf=0, minf=1 00:42:09.201 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:09.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.201 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.201 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:09.201 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:09.201 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3692293: Sun Nov 10 00:15:35 2024 00:42:09.201 read: IOPS=2841, BW=11.1MiB/s (11.6MB/s)(32.5MiB/2924msec) 00:42:09.201 slat (nsec): min=4359, max=57855, avg=10316.64, stdev=5477.32 00:42:09.201 clat (usec): min=215, max=41253, avg=336.27, stdev=1178.07 00:42:09.201 lat (usec): min=224, max=41269, avg=346.59, stdev=1178.07 00:42:09.201 clat percentiles (usec): 00:42:09.201 | 1.00th=[ 239], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 269], 00:42:09.201 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:42:09.201 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 379], 95.00th=[ 437], 00:42:09.201 | 99.00th=[ 510], 99.50th=[ 594], 99.90th=[ 971], 99.95th=[40633], 00:42:09.201 | 99.99th=[41157] 00:42:09.201 bw ( KiB/s): min=10400, max=13456, per=50.07%, avg=11571.20, stdev=1255.91, samples=5 00:42:09.201 iops : min= 2600, max= 3364, avg=2892.80, stdev=313.98, samples=5 00:42:09.201 lat (usec) : 250=1.29%, 500=97.53%, 750=1.02%, 1000=0.05% 00:42:09.202 lat (msec) : 2=0.01%, 50=0.08% 00:42:09.202 cpu : usr=1.27%, sys=4.11%, ctx=8312, majf=0, minf=1 00:42:09.202 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:09.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.202 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:09.202 issued rwts: total=8310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:09.202 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:09.202 00:42:09.202 Run status group 0 (all jobs): 00:42:09.202 READ: bw=22.6MiB/s (23.7MB/s), 100KiB/s-11.3MiB/s (103kB/s-11.8MB/s), io=86.3MiB (90.4MB), run=2924-3822msec 00:42:09.202 00:42:09.202 Disk stats (read/write): 00:42:09.202 nvme0n1: ios=9722/0, merge=0/0, ticks=3189/0, in_queue=3189, util=96.20% 00:42:09.202 nvme0n2: ios=2954/0, merge=0/0, ticks=3492/0, in_queue=3492, util=94.99% 00:42:09.202 nvme0n3: ios=128/0, merge=0/0, ticks=4018/0, in_queue=4018, util=100.00% 00:42:09.202 nvme0n4: ios=8186/0, merge=0/0, ticks=2841/0, in_queue=2841, util=100.00% 00:42:09.202 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:09.202 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:42:09.459 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:09.459 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:42:10.025 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:10.025 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:42:10.283 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:10.284 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:42:10.542 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:10.542 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:42:10.799 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:42:10.799 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3692202 00:42:10.799 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:42:10.799 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:11.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:11.733 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:11.733 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:42:11.733 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:42:11.733 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:11.733 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:42:11.733 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:11.733 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:42:11.733 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:42:11.733 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:42:11.733 nvmf hotplug test: fio failed as expected 00:42:11.733 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:42:11.992 00:15:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:11.992 rmmod nvme_tcp 00:42:11.992 rmmod nvme_fabrics 00:42:11.992 rmmod nvme_keyring 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3690058 ']' 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3690058 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3690058 ']' 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3690058 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3690058 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3690058' 00:42:11.992 killing process with pid 3690058 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3690058 00:42:11.992 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3690058 00:42:13.366 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:13.366 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:13.366 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:13.366 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:42:13.366 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:42:13.366 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:13.366 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:42:13.366 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:13.366 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:13.366 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:13.366 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:13.366 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:15.269 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:15.269 00:42:15.269 real 0m26.937s 00:42:15.269 user 1m11.722s 00:42:15.269 sys 0m10.829s 00:42:15.269 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:15.269 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:15.269 ************************************ 00:42:15.269 END TEST nvmf_fio_target 00:42:15.269 ************************************ 00:42:15.269 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:15.269 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:42:15.269 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:15.269 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:15.269 ************************************ 00:42:15.269 START TEST nvmf_bdevio 00:42:15.269 ************************************ 00:42:15.269 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:15.528 * Looking for test storage... 
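The nvmf_fio_target hotplug pass that ends above reduces to the following sequence. This is a condensed sketch assembled from the commands recorded in this log; the fio-wrapper options, bdev names and NQN are the ones shown above, while the helper variables and the error handling are illustrative only.

# Hotplug flow from target/fio.sh, condensed (sketch, not the exact script).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

sync
# background read I/O against the exported namespaces: 4 KiB blocks, QD1, 10 s
$SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# hot-remove the backing bdevs while fio is still running
$SPDK/scripts/rpc.py bdev_raid_delete concat0
$SPDK/scripts/rpc.py bdev_raid_delete raid0
for malloc in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $SPDK/scripts/rpc.py bdev_malloc_delete "$malloc"
done

# the reads now fail with "Operation not supported", so fio exits non-zero
wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'

# tear down the initiator connection and the subsystem
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1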
00:42:15.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:15.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.528 --rc genhtml_branch_coverage=1 00:42:15.528 --rc genhtml_function_coverage=1 00:42:15.528 --rc genhtml_legend=1 00:42:15.528 --rc geninfo_all_blocks=1 00:42:15.528 --rc geninfo_unexecuted_blocks=1 00:42:15.528 00:42:15.528 ' 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:15.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.528 --rc genhtml_branch_coverage=1 00:42:15.528 --rc genhtml_function_coverage=1 00:42:15.528 --rc genhtml_legend=1 00:42:15.528 --rc geninfo_all_blocks=1 00:42:15.528 --rc geninfo_unexecuted_blocks=1 00:42:15.528 00:42:15.528 ' 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:15.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.528 --rc genhtml_branch_coverage=1 00:42:15.528 --rc genhtml_function_coverage=1 00:42:15.528 --rc genhtml_legend=1 00:42:15.528 --rc geninfo_all_blocks=1 00:42:15.528 --rc geninfo_unexecuted_blocks=1 00:42:15.528 00:42:15.528 ' 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:15.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.528 --rc genhtml_branch_coverage=1 00:42:15.528 --rc genhtml_function_coverage=1 00:42:15.528 --rc genhtml_legend=1 00:42:15.528 --rc geninfo_all_blocks=1 00:42:15.528 --rc geninfo_unexecuted_blocks=1 00:42:15.528 00:42:15.528 ' 00:42:15.528 00:15:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:15.528 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:15.529 00:15:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:42:15.529 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:17.432 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:17.433 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:17.433 00:15:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:17.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:17.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:17.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:17.433 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:17.434 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:17.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:17.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:42:17.693 00:42:17.693 --- 10.0.0.2 ping statistics --- 00:42:17.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.693 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:17.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:17.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:42:17.693 00:42:17.693 --- 10.0.0.1 ping statistics --- 00:42:17.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.693 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:17.693 00:15:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3695180 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3695180 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3695180 ']' 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:17.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:17.693 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:17.694 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:17.953 [2024-11-10 00:15:43.962810] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:17.953 [2024-11-10 00:15:43.965838] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:42:17.953 [2024-11-10 00:15:43.965959] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:17.953 [2024-11-10 00:15:44.134404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:18.211 [2024-11-10 00:15:44.278441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:18.211 [2024-11-10 00:15:44.278504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:18.211 [2024-11-10 00:15:44.278534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:18.211 [2024-11-10 00:15:44.278556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:18.211 [2024-11-10 00:15:44.278577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:18.211 [2024-11-10 00:15:44.281401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:18.211 [2024-11-10 00:15:44.281480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:18.211 [2024-11-10 00:15:44.281549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:18.211 [2024-11-10 00:15:44.281559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:18.469 [2024-11-10 00:15:44.652445] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
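The environment this bdevio run executes in was prepared just above: the two E810 ports show up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a private network namespace as the target side, both ends get 10.0.0.0/24 addresses, TCP port 4420 is opened, connectivity is verified in both directions, and nvmf_tgt is launched inside the namespace in interrupt mode. A sketch assembled from the commands visible in the log (interface names and addresses are the ones assigned in this run):

# Network and target bring-up, condensed from the trace above (sketch).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec $NS ping -c 1 10.0.0.1                     # target -> initiator
modprobe nvme-tcp

ip netns exec $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &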
00:42:18.469 [2024-11-10 00:15:44.663937] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:18.469 [2024-11-10 00:15:44.664133] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:18.469 [2024-11-10 00:15:44.664983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:18.469 [2024-11-10 00:15:44.665339] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:19.036 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:19.036 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:42:19.036 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:19.036 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:19.036 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.036 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:19.036 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:19.036 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.036 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.036 [2024-11-10 00:15:44.994633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.036 Malloc0 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.036 00:15:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:19.036 [2024-11-10 00:15:45.110886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:19.036 { 00:42:19.036 "params": { 00:42:19.036 "name": "Nvme$subsystem", 00:42:19.036 "trtype": "$TEST_TRANSPORT", 00:42:19.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:19.036 "adrfam": "ipv4", 00:42:19.036 "trsvcid": "$NVMF_PORT", 00:42:19.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:19.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:19.036 "hdgst": ${hdgst:-false}, 00:42:19.036 "ddgst": ${ddgst:-false} 00:42:19.036 }, 00:42:19.036 "method": "bdev_nvme_attach_controller" 00:42:19.036 } 00:42:19.036 EOF 00:42:19.036 )") 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:42:19.036 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:19.036 "params": { 00:42:19.036 "name": "Nvme1", 00:42:19.036 "trtype": "tcp", 00:42:19.036 "traddr": "10.0.0.2", 00:42:19.036 "adrfam": "ipv4", 00:42:19.036 "trsvcid": "4420", 00:42:19.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:19.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:19.036 "hdgst": false, 00:42:19.036 "ddgst": false 00:42:19.036 }, 00:42:19.036 "method": "bdev_nvme_attach_controller" 00:42:19.036 }' 00:42:19.036 [2024-11-10 00:15:45.196737] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
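[annotation] The bdevio run is driven by the short RPC sequence traced above: create the TCP transport, back it with a 64 MiB malloc bdev, expose it through subsystem nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.2:4420; gen_nvmf_target_json then emits the bdev_nvme_attach_controller config that bdevio consumes via --json /dev/fd/62. A sketch of the same sequence issued by hand with scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket:

    # Illustrative only; mirrors the rpc_cmd calls from target/bdevio.sh traced above.
    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing store, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then attaches as an initiator using the generated JSON shown above:
    ./test/bdev/bdevio/bdevio --json config.json       # config holds the bdev_nvme_attach_controller block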
00:42:19.036 [2024-11-10 00:15:45.196879] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3695336 ] 00:42:19.295 [2024-11-10 00:15:45.336151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:19.295 [2024-11-10 00:15:45.471629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:19.295 [2024-11-10 00:15:45.471672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:19.295 [2024-11-10 00:15:45.471675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:19.866 I/O targets: 00:42:19.866 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:19.866 00:42:19.866 00:42:19.866 CUnit - A unit testing framework for C - Version 2.1-3 00:42:19.866 http://cunit.sourceforge.net/ 00:42:19.866 00:42:19.866 00:42:19.866 Suite: bdevio tests on: Nvme1n1 00:42:20.126 Test: blockdev write read block ...passed 00:42:20.126 Test: blockdev write zeroes read block ...passed 00:42:20.126 Test: blockdev write zeroes read no split ...passed 00:42:20.126 Test: blockdev write zeroes read split ...passed 00:42:20.126 Test: blockdev write zeroes read split partial ...passed 00:42:20.126 Test: blockdev reset ...[2024-11-10 00:15:46.205381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:42:20.126 [2024-11-10 00:15:46.205565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:42:20.126 [2024-11-10 00:15:46.259474] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:42:20.126 passed 00:42:20.126 Test: blockdev write read 8 blocks ...passed 00:42:20.126 Test: blockdev write read size > 128k ...passed 00:42:20.126 Test: blockdev write read invalid size ...passed 00:42:20.384 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:20.384 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:20.384 Test: blockdev write read max offset ...passed 00:42:20.384 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:20.384 Test: blockdev writev readv 8 blocks ...passed 00:42:20.384 Test: blockdev writev readv 30 x 1block ...passed 00:42:20.384 Test: blockdev writev readv block ...passed 00:42:20.384 Test: blockdev writev readv size > 128k ...passed 00:42:20.384 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:20.384 Test: blockdev comparev and writev ...[2024-11-10 00:15:46.556017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.384 [2024-11-10 00:15:46.556073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.384 [2024-11-10 00:15:46.556112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.384 [2024-11-10 00:15:46.556140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:20.384 [2024-11-10 00:15:46.556668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.384 [2024-11-10 00:15:46.556703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:20.384 [2024-11-10 00:15:46.556736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.384 [2024-11-10 00:15:46.556762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:20.384 [2024-11-10 00:15:46.557307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.384 [2024-11-10 00:15:46.557341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:20.384 [2024-11-10 00:15:46.557374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.384 [2024-11-10 00:15:46.557400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:20.384 [2024-11-10 00:15:46.557952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.384 [2024-11-10 00:15:46.557986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:20.384 [2024-11-10 00:15:46.558019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:20.384 [2024-11-10 00:15:46.558044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:20.643 passed 00:42:20.643 Test: blockdev nvme passthru rw ...passed 00:42:20.643 Test: blockdev nvme passthru vendor specific ...[2024-11-10 00:15:46.639956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:20.643 [2024-11-10 00:15:46.639998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:20.643 [2024-11-10 00:15:46.640221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:20.643 [2024-11-10 00:15:46.640255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:20.643 [2024-11-10 00:15:46.640463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:20.643 [2024-11-10 00:15:46.640495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:20.643 [2024-11-10 00:15:46.640713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:20.643 [2024-11-10 00:15:46.640746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:20.643 passed 00:42:20.643 Test: blockdev nvme admin passthru ...passed 00:42:20.643 Test: blockdev copy ...passed 00:42:20.643 00:42:20.643 Run Summary: Type Total Ran Passed Failed Inactive 00:42:20.643 suites 1 1 n/a 0 0 00:42:20.643 tests 23 23 23 0 0 00:42:20.643 asserts 152 152 152 0 n/a 00:42:20.643 00:42:20.643 Elapsed time = 1.342 seconds 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:21.580 rmmod nvme_tcp 00:42:21.580 rmmod nvme_fabrics 00:42:21.580 rmmod nvme_keyring 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
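[annotation] After the bdevio suite passes, the trap installed at startup is cleared and nvmftestfini tears the setup down: the kernel NVMe modules are unloaded, the target process is killed, the SPDK-tagged iptables rule is removed and the namespace interface is flushed in the lines that follow. A rough manual equivalent, hedged because some steps live inside helpers (nvmfcleanup, iptr, remove_spdk_ns) whose bodies are not shown in this trace:

    # Approximate sketch of the cleanup traced here and in the lines below.
    modprobe -r nvme-tcp nvme-fabrics                     # nvmfcleanup; also drops nvme_keyring
    kill 3695180 && wait 3695180                          # killprocess of the nvmf_tgt pid
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop the SPDK-tagged ACCEPT rule
    ip -4 addr flush cvl_0_1                              # part of the namespace teardown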
00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3695180 ']' 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3695180 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3695180 ']' 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3695180 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3695180 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3695180' 00:42:21.580 killing process with pid 3695180 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3695180 00:42:21.580 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3695180 00:42:22.957 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:22.957 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:22.957 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:22.957 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:42:22.957 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:42:22.957 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:22.957 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:42:22.957 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:22.957 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:22.957 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:22.957 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:22.957 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:24.866 00:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:24.866 00:42:24.866 real 0m9.547s 00:42:24.866 user 
0m17.703s 00:42:24.866 sys 0m3.048s 00:42:24.866 00:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:24.866 00:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:24.866 ************************************ 00:42:24.866 END TEST nvmf_bdevio 00:42:24.866 ************************************ 00:42:24.866 00:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:24.866 00:42:24.866 real 4m28.557s 00:42:24.866 user 9m46.986s 00:42:24.866 sys 1m27.920s 00:42:24.866 00:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:24.866 00:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:24.866 ************************************ 00:42:24.866 END TEST nvmf_target_core_interrupt_mode 00:42:24.866 ************************************ 00:42:24.866 00:15:51 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:24.866 00:15:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:42:24.866 00:15:51 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:24.866 00:15:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:25.126 ************************************ 00:42:25.126 START TEST nvmf_interrupt 00:42:25.126 ************************************ 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:25.126 * Looking for test storage... 
00:42:25.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:25.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.126 --rc genhtml_branch_coverage=1 00:42:25.126 --rc genhtml_function_coverage=1 00:42:25.126 --rc genhtml_legend=1 00:42:25.126 --rc geninfo_all_blocks=1 00:42:25.126 --rc geninfo_unexecuted_blocks=1 00:42:25.126 00:42:25.126 ' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:25.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.126 --rc genhtml_branch_coverage=1 00:42:25.126 --rc genhtml_function_coverage=1 00:42:25.126 --rc genhtml_legend=1 00:42:25.126 --rc geninfo_all_blocks=1 00:42:25.126 --rc geninfo_unexecuted_blocks=1 00:42:25.126 00:42:25.126 ' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:25.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.126 --rc genhtml_branch_coverage=1 00:42:25.126 --rc genhtml_function_coverage=1 00:42:25.126 --rc genhtml_legend=1 00:42:25.126 --rc geninfo_all_blocks=1 00:42:25.126 --rc geninfo_unexecuted_blocks=1 00:42:25.126 00:42:25.126 ' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:25.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.126 --rc genhtml_branch_coverage=1 00:42:25.126 --rc genhtml_function_coverage=1 00:42:25.126 --rc genhtml_legend=1 00:42:25.126 --rc geninfo_all_blocks=1 00:42:25.126 --rc geninfo_unexecuted_blocks=1 00:42:25.126 00:42:25.126 ' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:25.126 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:25.127 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:25.127 00:15:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:42:25.127 00:15:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:27.029 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:27.029 00:15:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:27.029 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:27.029 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:27.030 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:27.030 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:27.030 00:15:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:27.030 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:27.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:27.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:42:27.288 00:42:27.288 --- 10.0.0.2 ping statistics --- 00:42:27.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:27.288 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:27.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:27.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:42:27.288 00:42:27.288 --- 10.0.0.1 ping statistics --- 00:42:27.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:27.288 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3697681 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3697681 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 3697681 ']' 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:27.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:27.288 00:15:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:27.288 [2024-11-10 00:15:53.423127] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:27.288 [2024-11-10 00:15:53.425669] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:42:27.289 [2024-11-10 00:15:53.425773] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:27.546 [2024-11-10 00:15:53.577736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:27.546 [2024-11-10 00:15:53.715215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
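[annotation] Before nvmf_tgt is started for the interrupt-mode test, nvmftestinit wires the two E810 ports together through a network namespace, as traced above: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables ACCEPT rule opens TCP/4420, and a ping in each direction verifies the path. A condensed sketch of that plumbing (interface and namespace names are the ones this host reports):

    # Condensed from the nvmf/common.sh trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator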
00:42:27.546 [2024-11-10 00:15:53.715298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:27.546 [2024-11-10 00:15:53.715327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:27.546 [2024-11-10 00:15:53.715349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:27.546 [2024-11-10 00:15:53.715380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:27.546 [2024-11-10 00:15:53.717998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:27.546 [2024-11-10 00:15:53.718007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:28.112 [2024-11-10 00:15:54.089864] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:28.112 [2024-11-10 00:15:54.090631] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:28.112 [2024-11-10 00:15:54.090974] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:28.370 5000+0 records in 00:42:28.370 5000+0 records out 00:42:28.370 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0146014 s, 701 MB/s 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.370 AIO0 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.370 [2024-11-10 00:15:54.456682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:28.370 00:15:54 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.370 [2024-11-10 00:15:54.483362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3697681 0 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3697681 0 idle 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3697681 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3697681 -w 256 00:42:28.370 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:28.628 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3697681 root 20 0 20.1t 196608 101376 S 0.0 0.3 0:00.75 reactor_0' 00:42:28.628 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3697681 root 20 0 20.1t 196608 101376 S 0.0 0.3 0:00.75 reactor_0 00:42:28.628 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:28.628 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:28.628 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:28.628 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- 
# cpu_rate=0 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3697681 1 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3697681 1 idle 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3697681 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3697681 -w 256 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3697692 root 20 0 20.1t 196608 101376 S 0.0 0.3 0:00.00 reactor_1' 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3697692 root 20 0 20.1t 196608 101376 S 0.0 0.3 0:00.00 reactor_1 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3697858 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3697681 0 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3697681 0 busy 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3697681 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3697681 -w 256 00:42:28.629 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3697681 root 20 0 20.1t 202752 102528 R 53.3 0.3 0:00.84 reactor_0' 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3697681 root 20 0 20.1t 202752 102528 R 53.3 0.3 0:00.84 reactor_0 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=53.3 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=53 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3697681 1 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3697681 1 busy 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3697681 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:28.887 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:28.888 
00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:28.888 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:28.888 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3697681 -w 256 00:42:28.888 00:15:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:29.146 00:15:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3697692 root 20 0 20.1t 207744 102528 R 93.3 0.3 0:00.19 reactor_1' 00:42:29.146 00:15:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3697692 root 20 0 20.1t 207744 102528 R 93.3 0.3 0:00.19 reactor_1 00:42:29.146 00:15:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:29.146 00:15:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:29.146 00:15:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:42:29.146 00:15:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:42:29.146 00:15:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:29.146 00:15:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:29.146 00:15:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:29.146 00:15:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:29.146 00:15:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3697858 00:42:39.113 Initializing NVMe Controllers 00:42:39.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:39.113 Controller IO queue size 256, less than required. 00:42:39.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:39.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:39.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:39.113 Initialization complete. Launching workers. 
00:42:39.113 ======================================================== 00:42:39.113 Latency(us) 00:42:39.113 Device Information : IOPS MiB/s Average min max 00:42:39.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 10689.20 41.75 23969.65 6702.26 28814.31 00:42:39.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 10870.40 42.46 23570.11 6756.84 29562.20 00:42:39.113 ======================================================== 00:42:39.113 Total : 21559.60 84.22 23768.20 6702.26 29562.20 00:42:39.113 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3697681 0 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3697681 0 idle 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3697681 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3697681 -w 256 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3697681 root 20 0 20.1t 210432 102528 S 0.0 0.3 0:20.65 reactor_0' 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3697681 root 20 0 20.1t 210432 102528 S 0.0 0.3 0:20.65 reactor_0 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3697681 1 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3697681 1 idle 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3697681 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3697681 -w 256 00:42:39.113 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:39.373 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3697692 root 20 0 20.1t 210432 102528 S 0.0 0.3 0:09.91 reactor_1' 00:42:39.373 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3697692 root 20 0 20.1t 210432 102528 S 0.0 0.3 0:09.91 reactor_1 00:42:39.373 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:39.373 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:39.373 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:39.373 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:39.373 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:39.373 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:39.373 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:39.373 00:16:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:39.373 00:16:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:39.635 00:16:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:42:39.635 00:16:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:42:39.635 00:16:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:42:39.635 00:16:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:42:39.635 00:16:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt 
-- target/interrupt.sh@52 -- # for i in {0..1} 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3697681 0 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3697681 0 idle 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3697681 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3697681 -w 256 00:42:41.600 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3697681 root 20 0 20.1t 237696 111744 S 0.0 0.4 0:20.83 reactor_0' 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3697681 root 20 0 20.1t 237696 111744 S 0.0 0.4 0:20.83 reactor_0 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3697681 1 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3697681 1 idle 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3697681 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3697681 -w 256 00:42:41.858 00:16:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:41.858 00:16:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3697692 root 20 0 20.1t 237696 111744 S 0.0 0.4 0:09.98 reactor_1' 00:42:41.858 00:16:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3697692 root 20 0 20.1t 237696 111744 S 0.0 0.4 0:09.98 reactor_1 00:42:41.858 00:16:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:41.859 00:16:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:41.859 00:16:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:41.859 00:16:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:41.859 00:16:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:41.859 00:16:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:41.859 00:16:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:41.859 00:16:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:41.859 00:16:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:42.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:42.425 rmmod nvme_tcp 00:42:42.425 rmmod nvme_fabrics 00:42:42.425 rmmod nvme_keyring 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3697681 ']' 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3697681 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 3697681 ']' 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 3697681 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3697681 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3697681' 00:42:42.425 killing process with pid 3697681 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 3697681 00:42:42.425 00:16:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 3697681 00:42:43.798 00:16:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:43.798 00:16:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:43.798 00:16:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:43.798 00:16:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:42:43.798 00:16:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:42:43.798 00:16:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:42:43.798 00:16:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:43.798 00:16:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:43.798 00:16:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:43.798 00:16:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:43.798 00:16:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:43.798 00:16:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:45.706 00:16:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:45.706 00:42:45.706 real 0m20.603s 00:42:45.706 user 0m39.337s 00:42:45.706 sys 0m6.429s 00:42:45.707 00:16:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:45.707 00:16:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:45.707 ************************************ 00:42:45.707 END TEST nvmf_interrupt 00:42:45.707 ************************************ 00:42:45.707 00:42:45.707 real 35m36.912s 00:42:45.707 user 93m19.464s 00:42:45.707 sys 7m53.091s 00:42:45.707 00:16:11 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:45.707 00:16:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:45.707 ************************************ 00:42:45.707 END TEST nvmf_tcp 00:42:45.707 ************************************ 00:42:45.707 00:16:11 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:42:45.707 00:16:11 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:45.707 00:16:11 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:42:45.707 00:16:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:45.707 00:16:11 -- common/autotest_common.sh@10 -- # set +x 00:42:45.707 ************************************ 00:42:45.707 START TEST spdkcli_nvmf_tcp 00:42:45.707 ************************************ 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:45.707 * Looking for test storage... 00:42:45.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:45.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.707 --rc genhtml_branch_coverage=1 00:42:45.707 --rc genhtml_function_coverage=1 00:42:45.707 --rc genhtml_legend=1 00:42:45.707 --rc geninfo_all_blocks=1 00:42:45.707 --rc geninfo_unexecuted_blocks=1 00:42:45.707 00:42:45.707 ' 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:45.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.707 --rc genhtml_branch_coverage=1 00:42:45.707 --rc genhtml_function_coverage=1 00:42:45.707 --rc genhtml_legend=1 00:42:45.707 --rc geninfo_all_blocks=1 00:42:45.707 --rc geninfo_unexecuted_blocks=1 00:42:45.707 00:42:45.707 ' 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:45.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.707 --rc genhtml_branch_coverage=1 00:42:45.707 --rc genhtml_function_coverage=1 00:42:45.707 --rc genhtml_legend=1 00:42:45.707 --rc geninfo_all_blocks=1 00:42:45.707 --rc geninfo_unexecuted_blocks=1 00:42:45.707 00:42:45.707 ' 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:45.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.707 --rc genhtml_branch_coverage=1 00:42:45.707 --rc genhtml_function_coverage=1 00:42:45.707 --rc genhtml_legend=1 00:42:45.707 --rc geninfo_all_blocks=1 00:42:45.707 --rc geninfo_unexecuted_blocks=1 00:42:45.707 00:42:45.707 ' 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:42:45.707 
00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:45.707 00:16:11 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.707 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:45.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:45.708 00:16:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:42:45.966 00:16:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3699991 00:42:45.966 00:16:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:45.966 00:16:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3699991 00:42:45.966 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 3699991 ']' 00:42:45.966 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:45.966 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:45.966 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:45.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:45.966 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:45.966 00:16:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:45.966 [2024-11-10 00:16:11.998599] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:42:45.966 [2024-11-10 00:16:11.998765] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3699991 ] 00:42:45.966 [2024-11-10 00:16:12.132173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:46.224 [2024-11-10 00:16:12.258164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:46.224 [2024-11-10 00:16:12.258164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:46.798 00:16:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:46.798 00:16:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:42:46.798 00:16:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:46.798 00:16:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:46.798 00:16:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:47.056 00:16:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:47.056 00:16:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:47.056 00:16:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:42:47.056 00:16:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:47.056 00:16:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:47.056 00:16:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:47.056 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:47.056 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:47.056 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:47.056 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:42:47.056 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:42:47.056 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:47.056 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:47.056 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:47.056 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:47.056 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:47.056 ' 00:42:50.338 [2024-11-10 00:16:15.791434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:50.903 [2024-11-10 00:16:17.065396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:53.435 [2024-11-10 00:16:19.413091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:55.333 [2024-11-10 00:16:21.467901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:57.231 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:57.231 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:57.231 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:57.231 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:57.231 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:57.231 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:57.231 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:57.231 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:57.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:57.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:57.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:57.231 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:57.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:57.231 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:57.231 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:57.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:57.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:57.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:57.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:57.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:57.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:57.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:57.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:57.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:57.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:57.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:57.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:57.232 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:57.232 00:16:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:57.232 00:16:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:57.232 00:16:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.232 00:16:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:57.232 00:16:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:57.232 00:16:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.232 00:16:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:57.232 00:16:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:57.489 00:16:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:57.489 00:16:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:57.489 00:16:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:57.489 00:16:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:57.489 00:16:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.489 
00:16:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:57.489 00:16:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:57.489 00:16:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.489 00:16:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:57.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:57.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:57.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:57.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:57.489 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:57.489 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:57.489 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:57.489 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:42:57.489 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:57.489 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:57.489 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:57.489 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:57.489 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:57.489 ' 00:43:04.044 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:43:04.044 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:43:04.044 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:04.044 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:43:04.044 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:43:04.044 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:43:04.044 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:43:04.044 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:04.044 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:43:04.044 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:43:04.044 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:43:04.044 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:43:04.044 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:43:04.044 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:04.044 
00:16:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3699991 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3699991 ']' 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3699991 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3699991 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3699991' 00:43:04.044 killing process with pid 3699991 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 3699991 00:43:04.044 00:16:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 3699991 00:43:04.610 00:16:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:43:04.610 00:16:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:43:04.610 00:16:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3699991 ']' 00:43:04.610 00:16:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3699991 00:43:04.610 00:16:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3699991 ']' 00:43:04.610 00:16:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3699991 00:43:04.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3699991) - No such process 00:43:04.610 00:16:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 3699991 is not found' 00:43:04.610 Process with pid 3699991 is not found 00:43:04.610 00:16:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:04.610 00:16:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:04.610 00:16:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:04.610 00:43:04.610 real 0m19.060s 00:43:04.610 user 0m39.905s 00:43:04.610 sys 0m1.043s 00:43:04.610 00:16:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:04.610 00:16:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:04.610 ************************************ 00:43:04.610 END TEST spdkcli_nvmf_tcp 00:43:04.610 ************************************ 00:43:04.869 00:16:30 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:04.869 00:16:30 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:43:04.869 00:16:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:04.869 00:16:30 -- common/autotest_common.sh@10 -- # set +x 00:43:04.869 ************************************ 00:43:04.869 START TEST nvmf_identify_passthru 00:43:04.869 ************************************ 00:43:04.869 00:16:30 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:04.869 * Looking for test 
storage... 00:43:04.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:04.869 00:16:30 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:04.869 00:16:30 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:43:04.869 00:16:30 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:04.869 00:16:30 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:04.869 00:16:30 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:04.869 00:16:30 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:04.869 00:16:30 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:04.869 00:16:30 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:04.870 00:16:30 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:43:04.870 00:16:30 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:04.870 00:16:30 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:04.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.870 --rc genhtml_branch_coverage=1 00:43:04.870 --rc genhtml_function_coverage=1 00:43:04.870 --rc genhtml_legend=1 00:43:04.870 --rc geninfo_all_blocks=1 00:43:04.870 --rc geninfo_unexecuted_blocks=1 00:43:04.870 00:43:04.870 ' 00:43:04.870 00:16:30 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:04.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.870 --rc genhtml_branch_coverage=1 00:43:04.870 --rc genhtml_function_coverage=1 00:43:04.870 --rc genhtml_legend=1 00:43:04.870 --rc geninfo_all_blocks=1 00:43:04.870 --rc geninfo_unexecuted_blocks=1 00:43:04.870 00:43:04.870 ' 00:43:04.870 00:16:30 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:04.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.870 --rc genhtml_branch_coverage=1 00:43:04.870 --rc genhtml_function_coverage=1 00:43:04.870 --rc genhtml_legend=1 00:43:04.870 --rc geninfo_all_blocks=1 00:43:04.870 --rc geninfo_unexecuted_blocks=1 00:43:04.870 00:43:04.870 ' 00:43:04.870 00:16:30 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:04.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.870 --rc genhtml_branch_coverage=1 00:43:04.870 --rc genhtml_function_coverage=1 00:43:04.870 --rc genhtml_legend=1 00:43:04.870 --rc geninfo_all_blocks=1 00:43:04.870 --rc geninfo_unexecuted_blocks=1 00:43:04.870 00:43:04.870 ' 00:43:04.870 00:16:30 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:04.870 00:16:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:43:04.870 00:16:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:04.870 00:16:30 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:04.870 00:16:30 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:04.870 00:16:30 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:43:04.870 00:16:30 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:04.870 00:16:30 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:04.870 00:16:30 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:04.870 00:16:30 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:04.870 00:16:30 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:04.870 00:16:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:04.870 00:16:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:04.870 00:16:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:04.870 00:16:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:04.870 00:16:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:04.870 00:16:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.870 00:16:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.870 00:16:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.870 00:16:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:04.870 00:16:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:04.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:04.870 00:16:31 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:04.870 00:16:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:04.870 00:16:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:04.870 00:16:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:04.870 00:16:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:04.870 00:16:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.870 00:16:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.870 00:16:31 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.870 00:16:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:04.870 00:16:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.870 00:16:31 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:04.870 00:16:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:04.870 00:16:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:04.870 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:04.871 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:04.871 00:16:31 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:43:04.871 00:16:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:06.771 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:06.771 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:43:06.771 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:06.771 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:06.771 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:06.771 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:06.771 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:06.771 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:43:06.771 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:06.771 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:43:06.771 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:43:06.771 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:43:07.031 00:16:32 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:07.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:07.031 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:07.031 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:07.031 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:07.031 00:16:32 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:07.031 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:07.032 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:07.032 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:07.032 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:07.032 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:07.032 00:16:32 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:07.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:07.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:43:07.032 00:43:07.032 --- 10.0.0.2 ping statistics --- 00:43:07.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:07.032 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:07.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
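The xtrace above is nvmf_tcp_init turning the two E810 ports into a point-to-point NVMe/TCP test bed: cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits TCP port 4420, and a ping in each direction confirms reachability before any NVMe traffic is attempted. A minimal stand-alone sketch of the same wiring, with illustrative variable names rather than the helper functions from nvmf/common.sh:

  # Illustrative re-creation of the namespace wiring traced above.
  TGT_IF=cvl_0_0          # port that will host the NVMe/TCP target
  INI_IF=cvl_0_1          # port used by the initiator (stays in the root namespace)
  NS=cvl_0_0_ns_spdk      # private namespace for the target side

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                     # move target port into the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                 # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                    # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                # target namespace -> root namespace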
00:43:07.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:43:07.032 00:43:07.032 --- 10.0.0.1 ping statistics --- 00:43:07.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:07.032 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:07.032 00:16:33 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:07.032 00:16:33 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:07.032 00:16:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:43:07.032 00:16:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:88:00.0 00:43:07.032 00:16:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:43:07.032 00:16:33 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:43:07.032 00:16:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:07.032 00:16:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:43:07.032 00:16:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:12.303 00:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:43:12.303 00:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:12.303 00:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:12.303 00:16:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:16.535 00:16:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:43:16.536 00:16:41 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:16.536 00:16:41 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:16.536 00:16:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:16.536 00:16:41 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:16.536 00:16:41 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:16.536 00:16:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:16.536 00:16:41 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3704893 00:43:16.536 00:16:41 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:16.536 00:16:41 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:16.536 00:16:41 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3704893 00:43:16.536 00:16:41 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 3704893 ']' 00:43:16.536 00:16:41 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:16.536 00:16:41 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:16.536 00:16:41 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:16.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:16.536 00:16:41 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:16.536 00:16:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:16.536 [2024-11-10 00:16:42.093366] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:43:16.536 [2024-11-10 00:16:42.093516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:16.536 [2024-11-10 00:16:42.243604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:16.536 [2024-11-10 00:16:42.388667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:16.536 [2024-11-10 00:16:42.388744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
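The target application is launched inside the namespace with --wait-for-rpc, so it comes up with its framework paused; the harness records the PID, waits for the RPC socket, applies nvmf_set_config --passthru-identify-ctrlr, and only then issues framework_start_init and creates the TCP transport. A minimal sketch of that start-up handshake; the paths are illustrative and the polling loop is a simplified stand-in for waitforlisten:

  # Start the target paused, configure it over JSON-RPC, then let it run.
  NS=cvl_0_0_ns_spdk
  RPC=./scripts/rpc.py                 # SPDK's stock RPC client (path illustrative)

  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!

  # Poll until the app is listening on /var/tmp/spdk.sock.
  until "$RPC" rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.1
  done

  "$RPC" nvmf_set_config --passthru-identify-ctrlr   # must precede framework init
  "$RPC" framework_start_init
  "$RPC" nvmf_create_transport -t tcp -o -u 8192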
00:43:16.536 [2024-11-10 00:16:42.388779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:16.536 [2024-11-10 00:16:42.388802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:16.536 [2024-11-10 00:16:42.388822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:16.536 [2024-11-10 00:16:42.391685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:16.536 [2024-11-10 00:16:42.391742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:16.536 [2024-11-10 00:16:42.391801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:16.536 [2024-11-10 00:16:42.391808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:17.101 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:17.101 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:43:17.101 00:16:43 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:17.101 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:17.101 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:17.101 INFO: Log level set to 20 00:43:17.101 INFO: Requests: 00:43:17.101 { 00:43:17.101 "jsonrpc": "2.0", 00:43:17.101 "method": "nvmf_set_config", 00:43:17.101 "id": 1, 00:43:17.101 "params": { 00:43:17.101 "admin_cmd_passthru": { 00:43:17.101 "identify_ctrlr": true 00:43:17.101 } 00:43:17.101 } 00:43:17.101 } 00:43:17.101 00:43:17.101 INFO: response: 00:43:17.101 { 00:43:17.101 "jsonrpc": "2.0", 00:43:17.101 "id": 1, 00:43:17.101 "result": true 00:43:17.101 } 00:43:17.101 00:43:17.101 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:17.101 00:16:43 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:17.101 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:17.101 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:17.101 INFO: Setting log level to 20 00:43:17.101 INFO: Setting log level to 20 00:43:17.101 INFO: Log level set to 20 00:43:17.101 INFO: Log level set to 20 00:43:17.101 INFO: Requests: 00:43:17.101 { 00:43:17.101 "jsonrpc": "2.0", 00:43:17.101 "method": "framework_start_init", 00:43:17.101 "id": 1 00:43:17.101 } 00:43:17.101 00:43:17.101 INFO: Requests: 00:43:17.101 { 00:43:17.101 "jsonrpc": "2.0", 00:43:17.101 "method": "framework_start_init", 00:43:17.101 "id": 1 00:43:17.101 } 00:43:17.101 00:43:17.359 [2024-11-10 00:16:43.395146] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:17.359 INFO: response: 00:43:17.359 { 00:43:17.359 "jsonrpc": "2.0", 00:43:17.359 "id": 1, 00:43:17.359 "result": true 00:43:17.359 } 00:43:17.359 00:43:17.359 INFO: response: 00:43:17.359 { 00:43:17.359 "jsonrpc": "2.0", 00:43:17.359 "id": 1, 00:43:17.359 "result": true 00:43:17.359 } 00:43:17.359 00:43:17.359 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:17.359 00:16:43 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:17.359 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:17.359 00:16:43 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:43:17.359 INFO: Setting log level to 40 00:43:17.359 INFO: Setting log level to 40 00:43:17.359 INFO: Setting log level to 40 00:43:17.359 [2024-11-10 00:16:43.408090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:17.359 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:17.359 00:16:43 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:17.359 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:17.359 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:17.359 00:16:43 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:43:17.359 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:17.359 00:16:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:20.639 Nvme0n1 00:43:20.639 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:20.639 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:20.639 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:20.639 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:20.639 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:20.640 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:20.640 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:20.640 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:20.640 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:20.640 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:20.640 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:20.640 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:20.640 [2024-11-10 00:16:46.366249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:20.640 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:20.640 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:20.640 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:20.640 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:20.640 [ 00:43:20.640 { 00:43:20.640 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:20.640 "subtype": "Discovery", 00:43:20.640 "listen_addresses": [], 00:43:20.640 "allow_any_host": true, 00:43:20.640 "hosts": [] 00:43:20.640 }, 00:43:20.640 { 00:43:20.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:20.640 "subtype": "NVMe", 00:43:20.640 "listen_addresses": [ 00:43:20.640 { 00:43:20.640 "trtype": "TCP", 00:43:20.640 "adrfam": "IPv4", 00:43:20.640 "traddr": "10.0.0.2", 00:43:20.640 "trsvcid": "4420" 00:43:20.640 } 00:43:20.640 ], 00:43:20.640 "allow_any_host": true, 00:43:20.640 "hosts": [], 00:43:20.640 "serial_number": 
"SPDK00000000000001", 00:43:20.640 "model_number": "SPDK bdev Controller", 00:43:20.640 "max_namespaces": 1, 00:43:20.640 "min_cntlid": 1, 00:43:20.640 "max_cntlid": 65519, 00:43:20.640 "namespaces": [ 00:43:20.640 { 00:43:20.640 "nsid": 1, 00:43:20.640 "bdev_name": "Nvme0n1", 00:43:20.640 "name": "Nvme0n1", 00:43:20.640 "nguid": "4D8B393B89CA4E399B1018B4CA718755", 00:43:20.640 "uuid": "4d8b393b-89ca-4e39-9b10-18b4ca718755" 00:43:20.640 } 00:43:20.640 ] 00:43:20.640 } 00:43:20.640 ] 00:43:20.640 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:20.640 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:20.640 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:20.640 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:20.640 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:43:20.640 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:20.640 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:43:20.640 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:20.899 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:43:20.899 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:43:20.899 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:43:20.899 00:16:46 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:20.899 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:20.899 00:16:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:20.899 00:16:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:20.899 00:16:47 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:20.899 00:16:47 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:20.899 00:16:47 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:20.899 00:16:47 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:43:20.899 00:16:47 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:20.899 00:16:47 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:43:20.899 00:16:47 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:20.899 00:16:47 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:20.899 rmmod nvme_tcp 00:43:20.899 rmmod nvme_fabrics 00:43:20.899 rmmod nvme_keyring 00:43:20.899 00:16:47 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:20.899 00:16:47 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:43:20.899 00:16:47 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:43:20.899 00:16:47 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3704893 ']' 00:43:20.899 00:16:47 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3704893 00:43:20.899 00:16:47 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 3704893 ']' 00:43:20.899 00:16:47 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 3704893 00:43:20.899 00:16:47 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:43:20.899 00:16:47 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:20.899 00:16:47 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3704893 00:43:21.157 00:16:47 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:43:21.157 00:16:47 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:43:21.157 00:16:47 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3704893' 00:43:21.157 killing process with pid 3704893 00:43:21.157 00:16:47 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 3704893 00:43:21.157 00:16:47 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 3704893 00:43:23.685 00:16:49 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:23.685 00:16:49 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:23.685 00:16:49 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:23.685 00:16:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:43:23.685 00:16:49 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:43:23.685 00:16:49 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:23.685 00:16:49 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:43:23.685 00:16:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:23.685 00:16:49 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:23.685 00:16:49 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:23.685 00:16:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:23.685 00:16:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:25.590 00:16:51 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:25.590 00:43:25.590 real 0m20.763s 00:43:25.590 user 0m33.711s 00:43:25.590 sys 0m3.569s 00:43:25.590 00:16:51 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:25.590 00:16:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:25.590 ************************************ 00:43:25.590 END TEST nvmf_identify_passthru 00:43:25.590 ************************************ 00:43:25.590 00:16:51 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:25.590 00:16:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:43:25.590 00:16:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:25.590 00:16:51 -- common/autotest_common.sh@10 -- # set +x 00:43:25.590 ************************************ 00:43:25.590 START TEST nvmf_dif 00:43:25.590 ************************************ 00:43:25.590 00:16:51 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:25.590 * Looking for test 
storage... 00:43:25.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:25.590 00:16:51 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:25.590 00:16:51 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:43:25.590 00:16:51 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:25.853 00:16:51 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:25.853 00:16:51 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:25.853 00:16:51 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:25.853 00:16:51 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:25.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.853 --rc genhtml_branch_coverage=1 00:43:25.854 --rc genhtml_function_coverage=1 00:43:25.854 --rc genhtml_legend=1 00:43:25.854 --rc geninfo_all_blocks=1 00:43:25.854 --rc geninfo_unexecuted_blocks=1 00:43:25.854 00:43:25.854 ' 00:43:25.854 00:16:51 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:25.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.854 --rc genhtml_branch_coverage=1 00:43:25.854 --rc genhtml_function_coverage=1 00:43:25.854 --rc genhtml_legend=1 00:43:25.854 --rc geninfo_all_blocks=1 00:43:25.854 --rc geninfo_unexecuted_blocks=1 00:43:25.854 00:43:25.854 ' 00:43:25.854 00:16:51 nvmf_dif -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:25.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.854 --rc genhtml_branch_coverage=1 00:43:25.854 --rc genhtml_function_coverage=1 00:43:25.854 --rc genhtml_legend=1 00:43:25.854 --rc geninfo_all_blocks=1 00:43:25.854 --rc geninfo_unexecuted_blocks=1 00:43:25.854 00:43:25.854 ' 00:43:25.854 00:16:51 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:25.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.854 --rc genhtml_branch_coverage=1 00:43:25.854 --rc genhtml_function_coverage=1 00:43:25.854 --rc genhtml_legend=1 00:43:25.854 --rc geninfo_all_blocks=1 00:43:25.854 --rc geninfo_unexecuted_blocks=1 00:43:25.854 00:43:25.854 ' 00:43:25.854 00:16:51 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:25.854 00:16:51 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:25.854 00:16:51 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:25.854 00:16:51 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:25.854 00:16:51 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:25.854 00:16:51 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.854 00:16:51 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.854 00:16:51 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.854 00:16:51 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:25.854 00:16:51 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:25.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:25.854 00:16:51 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:25.854 00:16:51 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:43:25.854 00:16:51 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:25.854 00:16:51 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:25.854 00:16:51 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:25.854 00:16:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:25.854 00:16:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:25.854 00:16:51 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:43:25.854 00:16:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:27.756 00:16:53 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:27.757 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:27.757 
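The device scan running here keys a cache of PCI functions by "vendor:device" ID, collects the Intel E810 IDs (0x1592, 0x159b) into the e810 array, keeps those entries for NET_TYPE=phy with TCP, and then resolves each selected function to its kernel netdev through sysfs, which is where cvl_0_0 and cvl_0_1 come from. A compact sketch of the same lookup that reads the IDs straight from sysfs instead of the script's pre-built pci_bus_cache:

  # Map supported E810 PCI functions to their kernel net device names.
  declare -a e810 net_devs
  for pcidir in /sys/bus/pci/devices/*; do
      vend=$(<"$pcidir/vendor"); dev=$(<"$pcidir/device")
      # 0x1592 / 0x159b are the E810 device IDs the harness looks for.
      if [[ $vend == 0x8086 && ( $dev == 0x1592 || $dev == 0x159b ) ]]; then
          e810+=("${pcidir##*/}")
      fi
  done

  for pci in "${e810[@]}"; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $netdir ]] || continue
          net_devs+=("${netdir##*/}")             # e.g. cvl_0_0, cvl_0_1
      done
  done
  printf 'Found net device %s\n' "${net_devs[@]}"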
00:16:53 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:27.757 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:27.757 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:27.757 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:27.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:27.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:43:27.757 00:43:27.757 --- 10.0.0.2 ping statistics --- 00:43:27.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:27.757 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:27.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:27.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:43:27.757 00:43:27.757 --- 10.0.0.1 ping statistics --- 00:43:27.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:27.757 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:27.757 00:16:53 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:28.697 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:28.697 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:28.697 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:28.697 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:28.697 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:28.697 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:28.697 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:28.697 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:28.697 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:28.697 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:28.697 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:28.697 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:28.697 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:28.697 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:28.957 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:28.957 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:28.957 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:28.957 00:16:55 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:28.957 00:16:55 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:28.957 00:16:55 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:28.957 00:16:55 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:28.957 00:16:55 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:28.957 00:16:55 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:28.957 00:16:55 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:28.957 00:16:55 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:28.957 00:16:55 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:28.957 00:16:55 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:28.957 00:16:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:28.957 00:16:55 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3708407 00:43:28.957 00:16:55 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:28.957 00:16:55 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3708407 00:43:28.957 00:16:55 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 3708407 ']' 00:43:28.957 00:16:55 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:28.957 00:16:55 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:28.957 00:16:55 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:43:28.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:28.957 00:16:55 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:28.957 00:16:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:29.215 [2024-11-10 00:16:55.196934] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:43:29.215 [2024-11-10 00:16:55.197105] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:29.215 [2024-11-10 00:16:55.348232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:29.474 [2024-11-10 00:16:55.487378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:29.474 [2024-11-10 00:16:55.487470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:29.474 [2024-11-10 00:16:55.487495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:29.474 [2024-11-10 00:16:55.487520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:29.474 [2024-11-10 00:16:55.487540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:29.474 [2024-11-10 00:16:55.489209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:30.042 00:16:56 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:30.042 00:16:56 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:43:30.042 00:16:56 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:30.042 00:16:56 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:30.042 00:16:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:30.042 00:16:56 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:30.042 00:16:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:30.042 00:16:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:30.042 00:16:56 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.042 00:16:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:30.042 [2024-11-10 00:16:56.169198] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:30.042 00:16:56 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.042 00:16:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:30.042 00:16:56 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:43:30.042 00:16:56 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:30.042 00:16:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:30.042 ************************************ 00:43:30.042 START TEST fio_dif_1_default 00:43:30.042 ************************************ 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:30.042 bdev_null0 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:30.042 [2024-11-10 00:16:56.225501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:30.042 { 00:43:30.042 "params": { 00:43:30.042 "name": "Nvme$subsystem", 00:43:30.042 "trtype": "$TEST_TRANSPORT", 00:43:30.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:30.042 "adrfam": "ipv4", 00:43:30.042 "trsvcid": "$NVMF_PORT", 00:43:30.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:30.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:30.042 "hdgst": ${hdgst:-false}, 00:43:30.042 "ddgst": ${ddgst:-false} 00:43:30.042 }, 00:43:30.042 "method": "bdev_nvme_attach_controller" 00:43:30.042 } 00:43:30.042 EOF 00:43:30.042 )") 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
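The rpc_cmd calls above are thin wrappers around SPDK's scripts/rpc.py, so the same fio_dif_1_default setup could be driven by hand against the target's default RPC socket at /var/tmp/spdk.sock reported by waitforlisten earlier. A hedged sketch (not an excerpt from dif.sh; the rpc path is assumed relative to the spdk checkout):

  rpc=./scripts/rpc.py
  # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, protection information (DIF) type 1
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # export it over NVMe/TCP on the address configured in the namespaced interface setup above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420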
00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:43:30.042 00:16:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:30.042 "params": { 00:43:30.042 "name": "Nvme0", 00:43:30.042 "trtype": "tcp", 00:43:30.042 "traddr": "10.0.0.2", 00:43:30.042 "adrfam": "ipv4", 00:43:30.042 "trsvcid": "4420", 00:43:30.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:30.042 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:30.042 "hdgst": false, 00:43:30.042 "ddgst": false 00:43:30.042 }, 00:43:30.042 "method": "bdev_nvme_attach_controller" 00:43:30.042 }' 00:43:30.301 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:30.301 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:30.301 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # break 00:43:30.301 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:30.301 00:16:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:30.559 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:30.559 fio-3.35 00:43:30.559 Starting 1 thread 00:43:42.777 00:43:42.778 filename0: (groupid=0, jobs=1): err= 0: pid=3708759: Sun Nov 10 00:17:07 2024 00:43:42.778 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:43:42.778 slat (nsec): min=5447, max=80466, avg=16381.18, stdev=6369.73 00:43:42.778 clat (usec): min=40844, max=43866, avg=40967.87, stdev=194.74 00:43:42.778 lat (usec): min=40855, max=43889, avg=40984.25, stdev=194.89 00:43:42.778 clat percentiles (usec): 00:43:42.778 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:42.778 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:42.778 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:42.778 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:43:42.778 | 99.99th=[43779] 00:43:42.778 bw ( KiB/s): min= 384, max= 416, per=99.46%, avg=388.80, stdev=11.72, samples=20 00:43:42.778 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:42.778 lat (msec) : 50=100.00% 00:43:42.778 cpu : usr=92.23%, sys=7.26%, ctx=14, majf=0, minf=1636 00:43:42.778 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:42.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.778 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:42.778 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:42.778 00:43:42.778 Run status group 0 (all jobs): 00:43:42.778 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10008-10008msec 00:43:42.778 ----------------------------------------------------- 00:43:42.778 Suppressions used: 00:43:42.778 count bytes template 00:43:42.778 1 8 /usr/src/fio/parse.c 00:43:42.778 1 8 libtcmalloc_minimal.so 00:43:42.778 1 904 libcrypto.so 00:43:42.778 ----------------------------------------------------- 00:43:42.778 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 
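The fio jobfile itself never appears in the log because gen_fio_conf streams it to fio on /dev/fd/61. From the parameters fio echoes back above (one job named filename0, rw=randread, 4 KiB blocks, iodepth 4, roughly a 10 s run), a hedged reconstruction would look something like the sketch below. The bdev name Nvme0n1 is an assumption based on SPDK's usual <controller>n<nsid> naming for the Nvme0 controller attached in the printed JSON, and the standalone invocation mirrors the LD_PRELOAD line above with regular files in place of the /dev/fd descriptors; file names are hypothetical.

  cat > dif_default.fio <<'EOF'
  [global]
  thread=1
  rw=randread
  bs=4096
  iodepth=4
  time_based=1
  runtime=10
  [filename0]
  filename=Nvme0n1
  EOF
  # bdev.json would hold the bdev_nvme_attach_controller config printed above
  LD_PRELOAD='/usr/lib64/libasan.so.8 ./build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif_default.fio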
00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.778 00:43:42.778 real 0m12.419s 00:43:42.778 user 0m11.408s 00:43:42.778 sys 0m1.191s 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:42.778 ************************************ 00:43:42.778 END TEST fio_dif_1_default 00:43:42.778 ************************************ 00:43:42.778 00:17:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:43:42.778 00:17:08 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:43:42.778 00:17:08 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:42.778 00:17:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:42.778 ************************************ 00:43:42.778 START TEST fio_dif_1_multi_subsystems 00:43:42.778 ************************************ 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:42.778 bdev_null0 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:42.778 
00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:42.778 [2024-11-10 00:17:08.689257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:42.778 bdev_null1 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:42.778 { 00:43:42.778 "params": { 00:43:42.778 "name": "Nvme$subsystem", 00:43:42.778 "trtype": "$TEST_TRANSPORT", 00:43:42.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:42.778 "adrfam": "ipv4", 00:43:42.778 "trsvcid": "$NVMF_PORT", 00:43:42.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:42.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:42.778 "hdgst": ${hdgst:-false}, 00:43:42.778 "ddgst": ${ddgst:-false} 00:43:42.778 }, 00:43:42.778 "method": "bdev_nvme_attach_controller" 00:43:42.778 } 00:43:42.778 EOF 00:43:42.778 )") 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@73 -- # cat 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:42.778 { 00:43:42.778 "params": { 00:43:42.778 "name": "Nvme$subsystem", 00:43:42.778 "trtype": "$TEST_TRANSPORT", 00:43:42.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:42.778 "adrfam": "ipv4", 00:43:42.778 "trsvcid": "$NVMF_PORT", 00:43:42.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:42.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:42.778 "hdgst": ${hdgst:-false}, 00:43:42.778 "ddgst": ${ddgst:-false} 00:43:42.778 }, 00:43:42.778 "method": "bdev_nvme_attach_controller" 00:43:42.778 } 00:43:42.778 EOF 00:43:42.778 )") 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:43:42.778 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:42.778 "params": { 00:43:42.778 "name": "Nvme0", 00:43:42.778 "trtype": "tcp", 00:43:42.778 "traddr": "10.0.0.2", 00:43:42.778 "adrfam": "ipv4", 00:43:42.778 "trsvcid": "4420", 00:43:42.778 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:42.779 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:42.779 "hdgst": false, 00:43:42.779 "ddgst": false 00:43:42.779 }, 00:43:42.779 "method": "bdev_nvme_attach_controller" 00:43:42.779 },{ 00:43:42.779 "params": { 00:43:42.779 "name": "Nvme1", 00:43:42.779 "trtype": "tcp", 00:43:42.779 "traddr": "10.0.0.2", 00:43:42.779 "adrfam": "ipv4", 00:43:42.779 "trsvcid": "4420", 00:43:42.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:42.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:42.779 "hdgst": false, 00:43:42.779 "ddgst": false 00:43:42.779 }, 00:43:42.779 "method": "bdev_nvme_attach_controller" 00:43:42.779 }' 00:43:42.779 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:42.779 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:42.779 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # break 00:43:42.779 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:42.779 00:17:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:43.037 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:43.037 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:43.037 fio-3.35 00:43:43.037 Starting 2 threads 00:43:55.232 00:43:55.232 filename0: 
(groupid=0, jobs=1): err= 0: pid=3710278: Sun Nov 10 00:17:20 2024 00:43:55.232 read: IOPS=192, BW=768KiB/s (786kB/s)(7696KiB/10020msec) 00:43:55.232 slat (nsec): min=5276, max=47754, avg=14391.61, stdev=4905.27 00:43:55.232 clat (usec): min=653, max=42782, avg=20787.28, stdev=20413.81 00:43:55.232 lat (usec): min=664, max=42801, avg=20801.67, stdev=20413.93 00:43:55.232 clat percentiles (usec): 00:43:55.232 | 1.00th=[ 676], 5.00th=[ 693], 10.00th=[ 701], 20.00th=[ 717], 00:43:55.232 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 1172], 60.00th=[41157], 00:43:55.232 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:43:55.232 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:43:55.232 | 99.99th=[42730] 00:43:55.232 bw ( KiB/s): min= 704, max= 960, per=66.43%, avg=768.00, stdev=58.73, samples=20 00:43:55.232 iops : min= 176, max= 240, avg=192.00, stdev=14.68, samples=20 00:43:55.232 lat (usec) : 750=39.97%, 1000=9.30% 00:43:55.232 lat (msec) : 2=1.46%, 10=0.21%, 50=49.06% 00:43:55.232 cpu : usr=94.24%, sys=4.91%, ctx=40, majf=0, minf=1636 00:43:55.232 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:55.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.232 issued rwts: total=1924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.232 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:55.232 filename1: (groupid=0, jobs=1): err= 0: pid=3710279: Sun Nov 10 00:17:20 2024 00:43:55.232 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10034msec) 00:43:55.232 slat (nsec): min=8076, max=53421, avg=15420.49, stdev=5252.32 00:43:55.232 clat (usec): min=40826, max=43994, avg=41075.80, stdev=370.15 00:43:55.232 lat (usec): min=40845, max=44014, avg=41091.22, stdev=370.36 00:43:55.232 clat percentiles (usec): 00:43:55.232 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:55.232 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:55.232 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:43:55.232 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:43:55.232 | 99.99th=[43779] 00:43:55.232 bw ( KiB/s): min= 384, max= 416, per=33.56%, avg=388.80, stdev=11.72, samples=20 00:43:55.232 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:55.232 lat (msec) : 50=100.00% 00:43:55.232 cpu : usr=94.39%, sys=5.10%, ctx=30, majf=0, minf=1634 00:43:55.232 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:55.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.232 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.232 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:55.232 00:43:55.232 Run status group 0 (all jobs): 00:43:55.232 READ: bw=1156KiB/s (1184kB/s), 389KiB/s-768KiB/s (398kB/s-786kB/s), io=11.3MiB (11.9MB), run=10020-10034msec 00:43:55.232 ----------------------------------------------------- 00:43:55.232 Suppressions used: 00:43:55.232 count bytes template 00:43:55.232 2 16 /usr/src/fio/parse.c 00:43:55.232 1 8 libtcmalloc_minimal.so 00:43:55.232 1 904 libcrypto.so 00:43:55.232 ----------------------------------------------------- 00:43:55.232 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:55.232 00:43:55.232 real 0m12.549s 00:43:55.232 user 0m21.206s 00:43:55.232 sys 0m1.501s 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:55.232 00:17:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:55.232 ************************************ 00:43:55.232 END TEST fio_dif_1_multi_subsystems 00:43:55.232 ************************************ 00:43:55.232 00:17:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:55.232 00:17:21 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:43:55.232 00:17:21 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:55.232 00:17:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:55.232 ************************************ 00:43:55.232 START TEST fio_dif_rand_params 00:43:55.232 ************************************ 00:43:55.232 00:17:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.233 bdev_null0 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.233 [2024-11-10 00:17:21.293500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:55.233 { 00:43:55.233 "params": { 00:43:55.233 "name": "Nvme$subsystem", 00:43:55.233 "trtype": "$TEST_TRANSPORT", 00:43:55.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:55.233 "adrfam": "ipv4", 00:43:55.233 "trsvcid": "$NVMF_PORT", 00:43:55.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:55.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:55.233 "hdgst": ${hdgst:-false}, 00:43:55.233 "ddgst": ${ddgst:-false} 00:43:55.233 }, 00:43:55.233 "method": "bdev_nvme_attach_controller" 00:43:55.233 } 00:43:55.233 EOF 00:43:55.233 )") 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
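Not part of the test flow, but a quick way to confirm that the --md-size/--dif-type arguments above took effect on the target is to query the bdev over the same RPC socket; bdev_get_bdevs is a standard SPDK RPC whose JSON reply should report the block size, metadata size and DIF type configured for the bdev (hedged sketch, default socket assumed):

  # expect block_size 512, md_size 16 and dif_type 3 in the reply
  ./scripts/rpc.py bdev_get_bdevs -b bdev_null0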
00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:55.233 "params": { 00:43:55.233 "name": "Nvme0", 00:43:55.233 "trtype": "tcp", 00:43:55.233 "traddr": "10.0.0.2", 00:43:55.233 "adrfam": "ipv4", 00:43:55.233 "trsvcid": "4420", 00:43:55.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:55.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:55.233 "hdgst": false, 00:43:55.233 "ddgst": false 00:43:55.233 }, 00:43:55.233 "method": "bdev_nvme_attach_controller" 00:43:55.233 }' 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:55.233 00:17:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:55.491 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:55.491 ... 00:43:55.491 fio-3.35 00:43:55.491 Starting 3 threads 00:44:02.061 00:44:02.061 filename0: (groupid=0, jobs=1): err= 0: pid=3711801: Sun Nov 10 00:17:27 2024 00:44:02.061 read: IOPS=192, BW=24.1MiB/s (25.3MB/s)(122MiB/5045msec) 00:44:02.061 slat (nsec): min=7320, max=70368, avg=25906.06, stdev=5934.71 00:44:02.061 clat (usec): min=8075, max=89129, avg=15481.42, stdev=4765.51 00:44:02.061 lat (usec): min=8100, max=89155, avg=15507.33, stdev=4765.63 00:44:02.061 clat percentiles (usec): 00:44:02.061 | 1.00th=[ 9765], 5.00th=[11863], 10.00th=[12518], 20.00th=[13304], 00:44:02.061 | 30.00th=[13829], 40.00th=[14353], 50.00th=[15008], 60.00th=[15533], 00:44:02.061 | 70.00th=[16188], 80.00th=[16909], 90.00th=[17957], 95.00th=[19006], 00:44:02.061 | 99.00th=[45876], 99.50th=[53216], 99.90th=[89654], 99.95th=[89654], 00:44:02.061 | 99.99th=[89654] 00:44:02.061 bw ( KiB/s): min=20992, max=27392, per=34.62%, avg=24837.00, stdev=1894.35, samples=10 00:44:02.061 iops : min= 164, max= 214, avg=194.00, stdev=14.79, samples=10 00:44:02.061 lat (msec) : 10=1.03%, 20=96.61%, 50=1.54%, 100=0.82% 00:44:02.061 cpu : usr=94.94%, sys=4.14%, ctx=36, majf=0, minf=1634 00:44:02.061 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:02.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:02.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:02.061 issued rwts: total=973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:02.061 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:02.061 filename0: (groupid=0, jobs=1): err= 0: pid=3711802: Sun Nov 10 00:17:27 2024 00:44:02.061 read: IOPS=180, BW=22.6MiB/s (23.7MB/s)(114MiB/5046msec) 00:44:02.061 slat (nsec): min=9438, max=55182, avg=21649.07, stdev=6055.73 00:44:02.061 clat (usec): min=5400, max=56928, avg=16542.94, stdev=5382.88 00:44:02.061 lat (usec): min=5418, max=56953, avg=16564.59, stdev=5382.72 00:44:02.061 clat percentiles (usec): 00:44:02.061 | 1.00th=[10028], 5.00th=[12649], 10.00th=[13304], 20.00th=[14091], 00:44:02.061 | 
30.00th=[14746], 40.00th=[15270], 50.00th=[15926], 60.00th=[16581], 00:44:02.061 | 70.00th=[17171], 80.00th=[17957], 90.00th=[19006], 95.00th=[19792], 00:44:02.061 | 99.00th=[53216], 99.50th=[53740], 99.90th=[56886], 99.95th=[56886], 00:44:02.061 | 99.99th=[56886] 00:44:02.061 bw ( KiB/s): min=19968, max=26112, per=32.44%, avg=23270.40, stdev=1494.92, samples=10 00:44:02.061 iops : min= 156, max= 204, avg=181.80, stdev=11.68, samples=10 00:44:02.061 lat (msec) : 10=1.32%, 20=94.51%, 50=2.96%, 100=1.21% 00:44:02.061 cpu : usr=93.95%, sys=5.51%, ctx=12, majf=0, minf=1634 00:44:02.061 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:02.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:02.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:02.061 issued rwts: total=911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:02.061 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:02.061 filename0: (groupid=0, jobs=1): err= 0: pid=3711803: Sun Nov 10 00:17:27 2024 00:44:02.061 read: IOPS=188, BW=23.6MiB/s (24.7MB/s)(118MiB/5007msec) 00:44:02.061 slat (nsec): min=7672, max=47971, avg=21113.67, stdev=5567.69 00:44:02.061 clat (usec): min=8075, max=53753, avg=15881.79, stdev=3706.89 00:44:02.061 lat (usec): min=8092, max=53770, avg=15902.91, stdev=3706.68 00:44:02.061 clat percentiles (usec): 00:44:02.061 | 1.00th=[ 9241], 5.00th=[11994], 10.00th=[12780], 20.00th=[13698], 00:44:02.061 | 30.00th=[14353], 40.00th=[15008], 50.00th=[15795], 60.00th=[16319], 00:44:02.061 | 70.00th=[16909], 80.00th=[17695], 90.00th=[19006], 95.00th=[19792], 00:44:02.061 | 99.00th=[21627], 99.50th=[48497], 99.90th=[53740], 99.95th=[53740], 00:44:02.061 | 99.99th=[53740] 00:44:02.061 bw ( KiB/s): min=22272, max=26624, per=33.61%, avg=24110.50, stdev=1393.85, samples=10 00:44:02.061 iops : min= 174, max= 208, avg=188.30, stdev=10.93, samples=10 00:44:02.061 lat (msec) : 10=2.22%, 20=93.86%, 50=3.60%, 100=0.32% 00:44:02.061 cpu : usr=94.25%, sys=5.17%, ctx=6, majf=0, minf=1636 00:44:02.061 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:02.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:02.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:02.061 issued rwts: total=944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:02.061 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:02.061 00:44:02.061 Run status group 0 (all jobs): 00:44:02.061 READ: bw=70.1MiB/s (73.5MB/s), 22.6MiB/s-24.1MiB/s (23.7MB/s-25.3MB/s), io=354MiB (371MB), run=5007-5046msec 00:44:02.627 ----------------------------------------------------- 00:44:02.627 Suppressions used: 00:44:02.627 count bytes template 00:44:02.627 5 44 /usr/src/fio/parse.c 00:44:02.627 1 8 libtcmalloc_minimal.so 00:44:02.627 1 904 libcrypto.so 00:44:02.627 ----------------------------------------------------- 00:44:02.627 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.627 bdev_null0 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.627 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.628 [2024-11-10 00:17:28.720229] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.628 bdev_null1 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.628 bdev_null2 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.628 00:17:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:02.628 { 00:44:02.628 "params": { 00:44:02.628 "name": "Nvme$subsystem", 00:44:02.628 "trtype": "$TEST_TRANSPORT", 00:44:02.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:02.628 "adrfam": "ipv4", 00:44:02.628 "trsvcid": "$NVMF_PORT", 00:44:02.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:02.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:02.628 "hdgst": ${hdgst:-false}, 00:44:02.628 "ddgst": ${ddgst:-false} 00:44:02.628 }, 00:44:02.628 "method": "bdev_nvme_attach_controller" 00:44:02.628 } 00:44:02.628 EOF 00:44:02.628 )") 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@582 -- # cat 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:02.628 { 00:44:02.628 "params": { 00:44:02.628 "name": "Nvme$subsystem", 00:44:02.628 "trtype": "$TEST_TRANSPORT", 00:44:02.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:02.628 "adrfam": "ipv4", 00:44:02.628 "trsvcid": "$NVMF_PORT", 00:44:02.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:02.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:02.628 "hdgst": ${hdgst:-false}, 00:44:02.628 "ddgst": ${ddgst:-false} 00:44:02.628 }, 00:44:02.628 "method": "bdev_nvme_attach_controller" 00:44:02.628 } 00:44:02.628 EOF 00:44:02.628 )") 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:02.628 { 00:44:02.628 "params": { 00:44:02.628 "name": "Nvme$subsystem", 00:44:02.628 "trtype": "$TEST_TRANSPORT", 00:44:02.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:02.628 "adrfam": "ipv4", 00:44:02.628 "trsvcid": "$NVMF_PORT", 00:44:02.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:02.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:02.628 "hdgst": ${hdgst:-false}, 00:44:02.628 "ddgst": ${ddgst:-false} 00:44:02.628 }, 00:44:02.628 "method": "bdev_nvme_attach_controller" 00:44:02.628 } 00:44:02.628 EOF 00:44:02.628 )") 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:02.628 00:17:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:02.628 "params": { 00:44:02.628 "name": "Nvme0", 00:44:02.628 "trtype": "tcp", 00:44:02.628 "traddr": "10.0.0.2", 00:44:02.628 "adrfam": "ipv4", 00:44:02.628 "trsvcid": "4420", 00:44:02.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:02.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:02.628 "hdgst": false, 00:44:02.628 "ddgst": false 00:44:02.628 }, 00:44:02.629 "method": "bdev_nvme_attach_controller" 00:44:02.629 },{ 00:44:02.629 "params": { 00:44:02.629 "name": "Nvme1", 00:44:02.629 "trtype": "tcp", 00:44:02.629 "traddr": "10.0.0.2", 00:44:02.629 "adrfam": "ipv4", 00:44:02.629 "trsvcid": "4420", 00:44:02.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:02.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:02.629 "hdgst": false, 00:44:02.629 "ddgst": false 00:44:02.629 }, 00:44:02.629 "method": "bdev_nvme_attach_controller" 00:44:02.629 },{ 00:44:02.629 "params": { 00:44:02.629 "name": "Nvme2", 00:44:02.629 "trtype": "tcp", 00:44:02.629 "traddr": "10.0.0.2", 00:44:02.629 "adrfam": "ipv4", 00:44:02.629 "trsvcid": "4420", 00:44:02.629 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:44:02.629 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:44:02.629 "hdgst": false, 00:44:02.629 "ddgst": false 00:44:02.629 }, 00:44:02.629 "method": "bdev_nvme_attach_controller" 00:44:02.629 }' 00:44:02.629 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:02.629 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:02.629 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:44:02.629 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:02.629 00:17:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:03.195 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:03.195 ... 00:44:03.195 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:03.195 ... 00:44:03.195 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:03.195 ... 
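The JSON printed above is the bdev configuration that fio's spdk_bdev ioengine reads through --spdk_json_conf (delivered here on /dev/fd/62, with the job file arriving on /dev/fd/61). A minimal standalone sketch of the same invocation, assuming the plugin was built at the path shown in this log, that gen_nvmf_target_json wraps the printed entries in the standard "subsystems"/"bdev" envelope, and that the attached controller exposes bdev Nvme0n1; the /tmp paths below are illustrative, not part of the harness:

# Single-controller version of the config assembled above (TCP listener taken from this log).
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "adrfam": "ipv4",
            "traddr": "10.0.0.2", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Job matching the randread / bs=4k / iodepth=16 workload shown in the filename0/1/2 lines above.
cat > /tmp/dif.fio <<'FIO'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
rw=randread
bs=4k
iodepth=16
[filename0]
filename=Nvme0n1
FIO
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev fio /tmp/dif.fio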
00:44:03.195 fio-3.35 00:44:03.195 Starting 24 threads 00:44:15.451 00:44:15.451 filename0: (groupid=0, jobs=1): err= 0: pid=3712757: Sun Nov 10 00:17:40 2024 00:44:15.451 read: IOPS=353, BW=1414KiB/s (1448kB/s)(13.8MiB/10005msec) 00:44:15.451 slat (nsec): min=6612, max=88707, avg=33501.45, stdev=14077.44 00:44:15.451 clat (usec): min=18545, max=99306, avg=44960.10, stdev=4074.07 00:44:15.451 lat (usec): min=18563, max=99326, avg=44993.60, stdev=4071.48 00:44:15.451 clat percentiles (usec): 00:44:15.451 | 1.00th=[42730], 5.00th=[43254], 10.00th=[43779], 20.00th=[44303], 00:44:15.451 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:44:15.451 | 70.00th=[44827], 80.00th=[44827], 90.00th=[45351], 95.00th=[46400], 00:44:15.451 | 99.00th=[50594], 99.50th=[72877], 99.90th=[99091], 99.95th=[99091], 00:44:15.451 | 99.99th=[99091] 00:44:15.451 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1414.74, stdev=51.80, samples=19 00:44:15.451 iops : min= 320, max= 384, avg=353.68, stdev=12.95, samples=19 00:44:15.451 lat (msec) : 20=0.06%, 50=98.53%, 100=1.41% 00:44:15.451 cpu : usr=98.18%, sys=1.32%, ctx=17, majf=0, minf=1631 00:44:15.451 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.451 filename0: (groupid=0, jobs=1): err= 0: pid=3712758: Sun Nov 10 00:17:40 2024 00:44:15.451 read: IOPS=355, BW=1420KiB/s (1454kB/s)(13.9MiB/10003msec) 00:44:15.451 slat (usec): min=6, max=107, avg=39.46, stdev=10.76 00:44:15.451 clat (usec): min=22499, max=88035, avg=44692.96, stdev=2523.44 00:44:15.451 lat (usec): min=22554, max=88057, avg=44732.42, stdev=2521.15 00:44:15.451 clat percentiles (usec): 00:44:15.451 | 1.00th=[42730], 5.00th=[43254], 10.00th=[43779], 20.00th=[43779], 00:44:15.451 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:44:15.451 | 70.00th=[44827], 80.00th=[44827], 90.00th=[45351], 95.00th=[46400], 00:44:15.451 | 99.00th=[50594], 99.50th=[59507], 99.90th=[69731], 99.95th=[87557], 00:44:15.451 | 99.99th=[87557] 00:44:15.451 bw ( KiB/s): min= 1408, max= 1536, per=4.19%, avg=1421.47, stdev=40.36, samples=19 00:44:15.451 iops : min= 352, max= 384, avg=355.37, stdev=10.09, samples=19 00:44:15.451 lat (msec) : 50=98.65%, 100=1.35% 00:44:15.451 cpu : usr=96.86%, sys=1.96%, ctx=109, majf=0, minf=1633 00:44:15.451 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:15.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.451 filename0: (groupid=0, jobs=1): err= 0: pid=3712760: Sun Nov 10 00:17:40 2024 00:44:15.451 read: IOPS=356, BW=1427KiB/s (1461kB/s)(14.0MiB/10049msec) 00:44:15.451 slat (nsec): min=5334, max=77226, avg=22038.10, stdev=8392.40 00:44:15.451 clat (usec): min=15401, max=61807, avg=44667.69, stdev=2770.64 00:44:15.451 lat (usec): min=15427, max=61876, avg=44689.73, stdev=2771.21 00:44:15.451 clat percentiles (usec): 00:44:15.451 | 1.00th=[36439], 5.00th=[43779], 10.00th=[43779], 20.00th=[44303], 00:44:15.451 | 
30.00th=[44303], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:44:15.451 | 70.00th=[44827], 80.00th=[45351], 90.00th=[45876], 95.00th=[46400], 00:44:15.451 | 99.00th=[53216], 99.50th=[58983], 99.90th=[61604], 99.95th=[61604], 00:44:15.451 | 99.99th=[61604] 00:44:15.451 bw ( KiB/s): min= 1408, max= 1536, per=4.20%, avg=1427.20, stdev=46.89, samples=20 00:44:15.451 iops : min= 352, max= 384, avg=356.80, stdev=11.72, samples=20 00:44:15.451 lat (msec) : 20=0.45%, 50=98.10%, 100=1.45% 00:44:15.451 cpu : usr=96.88%, sys=1.98%, ctx=147, majf=0, minf=1633 00:44:15.451 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 issued rwts: total=3584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.451 filename0: (groupid=0, jobs=1): err= 0: pid=3712761: Sun Nov 10 00:17:40 2024 00:44:15.451 read: IOPS=356, BW=1428KiB/s (1462kB/s)(14.0MiB/10042msec) 00:44:15.451 slat (usec): min=6, max=144, avg=53.81, stdev=25.90 00:44:15.451 clat (usec): min=15227, max=62206, avg=44257.84, stdev=2867.02 00:44:15.451 lat (usec): min=15266, max=62260, avg=44311.65, stdev=2870.43 00:44:15.451 clat percentiles (usec): 00:44:15.451 | 1.00th=[31065], 5.00th=[43254], 10.00th=[43779], 20.00th=[43779], 00:44:15.451 | 30.00th=[43779], 40.00th=[44303], 50.00th=[44303], 60.00th=[44303], 00:44:15.451 | 70.00th=[44303], 80.00th=[44827], 90.00th=[45351], 95.00th=[45876], 00:44:15.451 | 99.00th=[50070], 99.50th=[58983], 99.90th=[62129], 99.95th=[62129], 00:44:15.451 | 99.99th=[62129] 00:44:15.451 bw ( KiB/s): min= 1408, max= 1536, per=4.20%, avg=1427.20, stdev=46.89, samples=20 00:44:15.451 iops : min= 352, max= 384, avg=356.80, stdev=11.72, samples=20 00:44:15.451 lat (msec) : 20=0.45%, 50=98.44%, 100=1.12% 00:44:15.451 cpu : usr=98.10%, sys=1.36%, ctx=18, majf=0, minf=1634 00:44:15.451 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:15.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 issued rwts: total=3584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.451 filename0: (groupid=0, jobs=1): err= 0: pid=3712762: Sun Nov 10 00:17:40 2024 00:44:15.451 read: IOPS=354, BW=1419KiB/s (1453kB/s)(13.9MiB/10010msec) 00:44:15.451 slat (usec): min=10, max=103, avg=40.32, stdev= 9.67 00:44:15.451 clat (usec): min=31278, max=78995, avg=44730.71, stdev=2660.69 00:44:15.451 lat (usec): min=31320, max=79022, avg=44771.03, stdev=2659.69 00:44:15.451 clat percentiles (usec): 00:44:15.451 | 1.00th=[43254], 5.00th=[43779], 10.00th=[43779], 20.00th=[43779], 00:44:15.451 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:44:15.451 | 70.00th=[44827], 80.00th=[44827], 90.00th=[45351], 95.00th=[46400], 00:44:15.451 | 99.00th=[50594], 99.50th=[59507], 99.90th=[77071], 99.95th=[79168], 00:44:15.451 | 99.99th=[79168] 00:44:15.451 bw ( KiB/s): min= 1280, max= 1536, per=4.19%, avg=1421.47, stdev=58.73, samples=19 00:44:15.451 iops : min= 320, max= 384, avg=355.37, stdev=14.68, samples=19 00:44:15.451 lat (msec) : 50=98.76%, 100=1.24% 00:44:15.451 cpu : usr=96.54%, sys=2.07%, ctx=500, majf=0, minf=1633 00:44:15.451 IO depths : 1=6.2%, 2=12.4%, 
4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.451 filename0: (groupid=0, jobs=1): err= 0: pid=3712764: Sun Nov 10 00:17:40 2024 00:44:15.451 read: IOPS=357, BW=1429KiB/s (1463kB/s)(14.0MiB/10034msec) 00:44:15.451 slat (nsec): min=5566, max=89861, avg=42623.06, stdev=10225.84 00:44:15.451 clat (usec): min=15472, max=62010, avg=44414.60, stdev=3030.84 00:44:15.451 lat (usec): min=15514, max=62036, avg=44457.23, stdev=3032.74 00:44:15.451 clat percentiles (usec): 00:44:15.451 | 1.00th=[31589], 5.00th=[43779], 10.00th=[43779], 20.00th=[43779], 00:44:15.451 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44303], 00:44:15.451 | 70.00th=[44827], 80.00th=[44827], 90.00th=[45351], 95.00th=[46400], 00:44:15.451 | 99.00th=[50594], 99.50th=[59507], 99.90th=[62129], 99.95th=[62129], 00:44:15.451 | 99.99th=[62129] 00:44:15.451 bw ( KiB/s): min= 1408, max= 1536, per=4.21%, avg=1428.21, stdev=47.95, samples=19 00:44:15.451 iops : min= 352, max= 384, avg=357.05, stdev=11.99, samples=19 00:44:15.451 lat (msec) : 20=0.39%, 50=98.33%, 100=1.28% 00:44:15.451 cpu : usr=95.66%, sys=2.64%, ctx=269, majf=0, minf=1634 00:44:15.451 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:15.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 issued rwts: total=3584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.451 filename0: (groupid=0, jobs=1): err= 0: pid=3712765: Sun Nov 10 00:17:40 2024 00:44:15.451 read: IOPS=353, BW=1413KiB/s (1447kB/s)(13.8MiB/10007msec) 00:44:15.451 slat (usec): min=4, max=112, avg=29.75, stdev=11.03 00:44:15.451 clat (msec): min=18, max=101, avg=45.00, stdev= 4.18 00:44:15.451 lat (msec): min=18, max=101, avg=45.03, stdev= 4.18 00:44:15.451 clat percentiles (msec): 00:44:15.451 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:15.451 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:15.451 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 47], 00:44:15.451 | 99.00th=[ 51], 99.50th=[ 74], 99.90th=[ 102], 99.95th=[ 102], 00:44:15.451 | 99.99th=[ 102] 00:44:15.451 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1414.74, stdev=51.80, samples=19 00:44:15.451 iops : min= 320, max= 384, avg=353.68, stdev=12.95, samples=19 00:44:15.451 lat (msec) : 20=0.06%, 50=98.53%, 100=0.96%, 250=0.45% 00:44:15.451 cpu : usr=96.90%, sys=1.86%, ctx=171, majf=0, minf=1633 00:44:15.451 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.451 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.451 filename0: (groupid=0, jobs=1): err= 0: pid=3712766: Sun Nov 10 00:17:40 2024 00:44:15.451 read: IOPS=357, BW=1428KiB/s (1462kB/s)(14.0MiB/10039msec) 00:44:15.451 slat (usec): min=5, max=112, avg=45.73, stdev=22.42 00:44:15.452 clat (usec): 
min=15187, max=61879, avg=44473.76, stdev=2937.59 00:44:15.452 lat (usec): min=15204, max=61908, avg=44519.49, stdev=2938.78 00:44:15.452 clat percentiles (usec): 00:44:15.452 | 1.00th=[31589], 5.00th=[43779], 10.00th=[43779], 20.00th=[43779], 00:44:15.452 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:44:15.452 | 70.00th=[44827], 80.00th=[44827], 90.00th=[45351], 95.00th=[46400], 00:44:15.452 | 99.00th=[50594], 99.50th=[59507], 99.90th=[61604], 99.95th=[61604], 00:44:15.452 | 99.99th=[62129] 00:44:15.452 bw ( KiB/s): min= 1408, max= 1536, per=4.20%, avg=1427.20, stdev=46.89, samples=20 00:44:15.452 iops : min= 352, max= 384, avg=356.80, stdev=11.72, samples=20 00:44:15.452 lat (msec) : 20=0.45%, 50=98.13%, 100=1.42% 00:44:15.452 cpu : usr=98.41%, sys=1.12%, ctx=16, majf=0, minf=1634 00:44:15.452 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 issued rwts: total=3584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.452 filename1: (groupid=0, jobs=1): err= 0: pid=3712767: Sun Nov 10 00:17:40 2024 00:44:15.452 read: IOPS=353, BW=1414KiB/s (1447kB/s)(13.8MiB/10006msec) 00:44:15.452 slat (usec): min=12, max=100, avg=33.66, stdev=13.88 00:44:15.452 clat (msec): min=18, max=126, avg=44.98, stdev= 4.34 00:44:15.452 lat (msec): min=18, max=126, avg=45.01, stdev= 4.34 00:44:15.452 clat percentiles (msec): 00:44:15.452 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:15.452 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:15.452 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 47], 00:44:15.452 | 99.00th=[ 51], 99.50th=[ 64], 99.90th=[ 102], 99.95th=[ 126], 00:44:15.452 | 99.99th=[ 127] 00:44:15.452 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1414.74, stdev=51.80, samples=19 00:44:15.452 iops : min= 320, max= 384, avg=353.68, stdev=12.95, samples=19 00:44:15.452 lat (msec) : 20=0.06%, 50=98.59%, 100=0.90%, 250=0.45% 00:44:15.452 cpu : usr=97.49%, sys=1.75%, ctx=32, majf=0, minf=1633 00:44:15.452 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.452 filename1: (groupid=0, jobs=1): err= 0: pid=3712769: Sun Nov 10 00:17:40 2024 00:44:15.452 read: IOPS=352, BW=1412KiB/s (1446kB/s)(13.8MiB/10017msec) 00:44:15.452 slat (usec): min=11, max=119, avg=48.69, stdev=24.57 00:44:15.452 clat (msec): min=35, max=124, avg=44.81, stdev= 5.52 00:44:15.452 lat (msec): min=35, max=124, avg=44.86, stdev= 5.52 00:44:15.452 clat percentiles (msec): 00:44:15.452 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:44:15.452 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:15.452 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:44:15.452 | 99.00th=[ 51], 99.50th=[ 60], 99.90th=[ 125], 99.95th=[ 125], 00:44:15.452 | 99.99th=[ 125] 00:44:15.452 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1414.74, stdev=79.52, samples=19 00:44:15.452 iops : min= 288, max= 384, avg=353.68, stdev=19.88, 
samples=19 00:44:15.452 lat (msec) : 50=98.87%, 100=0.68%, 250=0.45% 00:44:15.452 cpu : usr=98.32%, sys=1.19%, ctx=24, majf=0, minf=1631 00:44:15.452 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:15.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.452 filename1: (groupid=0, jobs=1): err= 0: pid=3712770: Sun Nov 10 00:17:40 2024 00:44:15.452 read: IOPS=357, BW=1428KiB/s (1463kB/s)(14.0MiB/10037msec) 00:44:15.452 slat (nsec): min=5455, max=78843, avg=36935.81, stdev=9800.16 00:44:15.452 clat (usec): min=7989, max=63565, avg=44495.82, stdev=3023.88 00:44:15.452 lat (usec): min=8001, max=63599, avg=44532.76, stdev=3024.89 00:44:15.452 clat percentiles (usec): 00:44:15.452 | 1.00th=[31589], 5.00th=[43779], 10.00th=[43779], 20.00th=[44303], 00:44:15.452 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:44:15.452 | 70.00th=[44827], 80.00th=[44827], 90.00th=[45351], 95.00th=[46400], 00:44:15.452 | 99.00th=[50594], 99.50th=[59507], 99.90th=[62129], 99.95th=[63701], 00:44:15.452 | 99.99th=[63701] 00:44:15.452 bw ( KiB/s): min= 1280, max= 1536, per=4.20%, avg=1427.20, stdev=62.64, samples=20 00:44:15.452 iops : min= 320, max= 384, avg=356.80, stdev=15.66, samples=20 00:44:15.452 lat (msec) : 10=0.06%, 20=0.39%, 50=98.21%, 100=1.34% 00:44:15.452 cpu : usr=96.54%, sys=2.07%, ctx=258, majf=0, minf=1632 00:44:15.452 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:15.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 issued rwts: total=3584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.452 filename1: (groupid=0, jobs=1): err= 0: pid=3712771: Sun Nov 10 00:17:40 2024 00:44:15.452 read: IOPS=354, BW=1417KiB/s (1451kB/s)(13.9MiB/10027msec) 00:44:15.452 slat (usec): min=5, max=103, avg=45.42, stdev=12.77 00:44:15.452 clat (usec): min=29765, max=96138, avg=44749.45, stdev=3676.21 00:44:15.452 lat (usec): min=29807, max=96158, avg=44794.87, stdev=3674.42 00:44:15.452 clat percentiles (usec): 00:44:15.452 | 1.00th=[43254], 5.00th=[43779], 10.00th=[43779], 20.00th=[43779], 00:44:15.452 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44303], 00:44:15.452 | 70.00th=[44827], 80.00th=[44827], 90.00th=[45351], 95.00th=[45876], 00:44:15.452 | 99.00th=[50594], 99.50th=[59507], 99.90th=[93848], 99.95th=[95945], 00:44:15.452 | 99.99th=[95945] 00:44:15.452 bw ( KiB/s): min= 1280, max= 1536, per=4.19%, avg=1421.47, stdev=58.73, samples=19 00:44:15.452 iops : min= 320, max= 384, avg=355.37, stdev=14.68, samples=19 00:44:15.452 lat (msec) : 50=98.76%, 100=1.24% 00:44:15.452 cpu : usr=98.16%, sys=1.35%, ctx=16, majf=0, minf=1632 00:44:15.452 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.452 filename1: (groupid=0, jobs=1): 
err= 0: pid=3712773: Sun Nov 10 00:17:40 2024 00:44:15.452 read: IOPS=357, BW=1428KiB/s (1462kB/s)(14.0MiB/10039msec) 00:44:15.452 slat (usec): min=7, max=118, avg=45.21, stdev=22.94 00:44:15.452 clat (usec): min=15246, max=63629, avg=44480.01, stdev=2957.15 00:44:15.452 lat (usec): min=15264, max=63657, avg=44525.22, stdev=2957.04 00:44:15.452 clat percentiles (usec): 00:44:15.452 | 1.00th=[31327], 5.00th=[43779], 10.00th=[43779], 20.00th=[43779], 00:44:15.452 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:44:15.452 | 70.00th=[44827], 80.00th=[44827], 90.00th=[45351], 95.00th=[46400], 00:44:15.452 | 99.00th=[50594], 99.50th=[58983], 99.90th=[62129], 99.95th=[63701], 00:44:15.452 | 99.99th=[63701] 00:44:15.452 bw ( KiB/s): min= 1408, max= 1536, per=4.20%, avg=1427.20, stdev=46.89, samples=20 00:44:15.452 iops : min= 352, max= 384, avg=356.80, stdev=11.72, samples=20 00:44:15.452 lat (msec) : 20=0.45%, 50=98.27%, 100=1.28% 00:44:15.452 cpu : usr=98.38%, sys=1.14%, ctx=19, majf=0, minf=1634 00:44:15.452 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 issued rwts: total=3584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.452 filename1: (groupid=0, jobs=1): err= 0: pid=3712774: Sun Nov 10 00:17:40 2024 00:44:15.452 read: IOPS=353, BW=1412KiB/s (1446kB/s)(13.8MiB/10014msec) 00:44:15.452 slat (nsec): min=11836, max=85401, avg=34237.98, stdev=13633.79 00:44:15.452 clat (msec): min=36, max=124, avg=44.97, stdev= 4.96 00:44:15.452 lat (msec): min=36, max=124, avg=45.00, stdev= 4.96 00:44:15.452 clat percentiles (msec): 00:44:15.452 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:15.452 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:15.452 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:44:15.452 | 99.00th=[ 52], 99.50th=[ 55], 99.90th=[ 116], 99.95th=[ 125], 00:44:15.452 | 99.99th=[ 125] 00:44:15.452 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1414.74, stdev=79.52, samples=19 00:44:15.452 iops : min= 288, max= 384, avg=353.68, stdev=19.88, samples=19 00:44:15.452 lat (msec) : 50=98.64%, 100=0.90%, 250=0.45% 00:44:15.452 cpu : usr=98.36%, sys=1.15%, ctx=12, majf=0, minf=1633 00:44:15.452 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.452 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.452 filename1: (groupid=0, jobs=1): err= 0: pid=3712775: Sun Nov 10 00:17:40 2024 00:44:15.452 read: IOPS=353, BW=1412KiB/s (1446kB/s)(13.8MiB/10016msec) 00:44:15.452 slat (usec): min=13, max=119, avg=46.33, stdev=23.91 00:44:15.452 clat (msec): min=35, max=122, avg=44.82, stdev= 5.41 00:44:15.452 lat (msec): min=35, max=122, avg=44.87, stdev= 5.41 00:44:15.452 clat percentiles (msec): 00:44:15.452 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:44:15.452 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:15.452 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:44:15.452 | 99.00th=[ 51], 99.50th=[ 60], 99.90th=[ 
123], 99.95th=[ 123], 00:44:15.452 | 99.99th=[ 123] 00:44:15.452 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1414.74, stdev=79.52, samples=19 00:44:15.452 iops : min= 288, max= 384, avg=353.68, stdev=19.88, samples=19 00:44:15.452 lat (msec) : 50=98.84%, 100=0.71%, 250=0.45% 00:44:15.452 cpu : usr=97.94%, sys=1.56%, ctx=15, majf=0, minf=1633 00:44:15.452 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:15.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.453 filename1: (groupid=0, jobs=1): err= 0: pid=3712777: Sun Nov 10 00:17:40 2024 00:44:15.453 read: IOPS=352, BW=1411KiB/s (1445kB/s)(13.8MiB/10025msec) 00:44:15.453 slat (usec): min=8, max=111, avg=32.52, stdev=11.71 00:44:15.453 clat (msec): min=42, max=122, avg=45.06, stdev= 5.34 00:44:15.453 lat (msec): min=42, max=122, avg=45.09, stdev= 5.34 00:44:15.453 clat percentiles (msec): 00:44:15.453 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:15.453 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:15.453 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 47], 00:44:15.453 | 99.00th=[ 52], 99.50th=[ 58], 99.90th=[ 123], 99.95th=[ 123], 00:44:15.453 | 99.99th=[ 123] 00:44:15.453 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1414.74, stdev=79.52, samples=19 00:44:15.453 iops : min= 288, max= 384, avg=353.68, stdev=19.88, samples=19 00:44:15.453 lat (msec) : 50=98.64%, 100=0.90%, 250=0.45% 00:44:15.453 cpu : usr=98.14%, sys=1.22%, ctx=50, majf=0, minf=1631 00:44:15.453 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:15.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.453 filename2: (groupid=0, jobs=1): err= 0: pid=3712778: Sun Nov 10 00:17:40 2024 00:44:15.453 read: IOPS=353, BW=1412KiB/s (1446kB/s)(13.8MiB/10016msec) 00:44:15.453 slat (usec): min=16, max=158, avg=43.30, stdev=22.41 00:44:15.453 clat (msec): min=35, max=123, avg=44.87, stdev= 5.44 00:44:15.453 lat (msec): min=35, max=123, avg=44.91, stdev= 5.44 00:44:15.453 clat percentiles (msec): 00:44:15.453 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:44:15.453 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:15.453 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:44:15.453 | 99.00th=[ 51], 99.50th=[ 60], 99.90th=[ 124], 99.95th=[ 124], 00:44:15.453 | 99.99th=[ 124] 00:44:15.453 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1414.74, stdev=79.52, samples=19 00:44:15.453 iops : min= 288, max= 384, avg=353.68, stdev=19.88, samples=19 00:44:15.453 lat (msec) : 50=98.70%, 100=0.85%, 250=0.45% 00:44:15.453 cpu : usr=98.04%, sys=1.42%, ctx=13, majf=0, minf=1633 00:44:15.453 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:15.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.453 
latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.453 filename2: (groupid=0, jobs=1): err= 0: pid=3712779: Sun Nov 10 00:17:40 2024 00:44:15.453 read: IOPS=354, BW=1420KiB/s (1454kB/s)(13.9MiB/10008msec) 00:44:15.453 slat (nsec): min=7133, max=98944, avg=44964.40, stdev=13600.87 00:44:15.453 clat (usec): min=31380, max=74409, avg=44658.63, stdev=2527.12 00:44:15.453 lat (usec): min=31403, max=74428, avg=44703.59, stdev=2526.45 00:44:15.453 clat percentiles (usec): 00:44:15.453 | 1.00th=[43254], 5.00th=[43779], 10.00th=[43779], 20.00th=[43779], 00:44:15.453 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44303], 00:44:15.453 | 70.00th=[44827], 80.00th=[44827], 90.00th=[45351], 95.00th=[45876], 00:44:15.453 | 99.00th=[50070], 99.50th=[59507], 99.90th=[73925], 99.95th=[73925], 00:44:15.453 | 99.99th=[73925] 00:44:15.453 bw ( KiB/s): min= 1280, max= 1536, per=4.19%, avg=1421.47, stdev=72.59, samples=19 00:44:15.453 iops : min= 320, max= 384, avg=355.37, stdev=18.15, samples=19 00:44:15.453 lat (msec) : 50=98.82%, 100=1.18% 00:44:15.453 cpu : usr=98.25%, sys=1.22%, ctx=16, majf=0, minf=1633 00:44:15.453 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:15.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.453 filename2: (groupid=0, jobs=1): err= 0: pid=3712781: Sun Nov 10 00:17:40 2024 00:44:15.453 read: IOPS=354, BW=1420KiB/s (1454kB/s)(13.9MiB/10008msec) 00:44:15.453 slat (usec): min=13, max=143, avg=50.30, stdev=14.69 00:44:15.453 clat (usec): min=30953, max=74690, avg=44624.73, stdev=2575.14 00:44:15.453 lat (usec): min=30998, max=74731, avg=44675.03, stdev=2572.85 00:44:15.453 clat percentiles (usec): 00:44:15.453 | 1.00th=[42730], 5.00th=[43254], 10.00th=[43779], 20.00th=[43779], 00:44:15.453 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44303], 00:44:15.453 | 70.00th=[44827], 80.00th=[44827], 90.00th=[45351], 95.00th=[45876], 00:44:15.453 | 99.00th=[50594], 99.50th=[59507], 99.90th=[74974], 99.95th=[74974], 00:44:15.453 | 99.99th=[74974] 00:44:15.453 bw ( KiB/s): min= 1280, max= 1536, per=4.19%, avg=1421.47, stdev=72.59, samples=19 00:44:15.453 iops : min= 320, max= 384, avg=355.37, stdev=18.15, samples=19 00:44:15.453 lat (msec) : 50=98.73%, 100=1.27% 00:44:15.453 cpu : usr=98.16%, sys=1.34%, ctx=17, majf=0, minf=1633 00:44:15.453 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.453 filename2: (groupid=0, jobs=1): err= 0: pid=3712782: Sun Nov 10 00:17:40 2024 00:44:15.453 read: IOPS=352, BW=1412KiB/s (1446kB/s)(13.8MiB/10017msec) 00:44:15.453 slat (nsec): min=13200, max=95979, avg=36466.94, stdev=15273.10 00:44:15.453 clat (msec): min=36, max=119, avg=45.01, stdev= 5.14 00:44:15.453 lat (msec): min=36, max=119, avg=45.05, stdev= 5.14 00:44:15.453 clat percentiles (msec): 00:44:15.453 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:15.453 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 
60.00th=[ 45], 00:44:15.453 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 47], 00:44:15.453 | 99.00th=[ 53], 99.50th=[ 55], 99.90th=[ 120], 99.95th=[ 120], 00:44:15.453 | 99.99th=[ 120] 00:44:15.453 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1414.74, stdev=79.52, samples=19 00:44:15.453 iops : min= 288, max= 384, avg=353.68, stdev=19.88, samples=19 00:44:15.453 lat (msec) : 50=98.53%, 100=1.02%, 250=0.45% 00:44:15.453 cpu : usr=96.51%, sys=2.21%, ctx=255, majf=0, minf=1633 00:44:15.453 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.453 filename2: (groupid=0, jobs=1): err= 0: pid=3712783: Sun Nov 10 00:17:40 2024 00:44:15.453 read: IOPS=353, BW=1412KiB/s (1446kB/s)(13.8MiB/10014msec) 00:44:15.453 slat (nsec): min=12090, max=86482, avg=37748.94, stdev=14931.53 00:44:15.453 clat (msec): min=37, max=115, avg=44.94, stdev= 4.90 00:44:15.453 lat (msec): min=37, max=115, avg=44.98, stdev= 4.90 00:44:15.453 clat percentiles (msec): 00:44:15.453 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:15.453 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:15.453 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:44:15.453 | 99.00th=[ 52], 99.50th=[ 55], 99.90th=[ 116], 99.95th=[ 116], 00:44:15.453 | 99.99th=[ 116] 00:44:15.453 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1414.74, stdev=79.52, samples=19 00:44:15.453 iops : min= 288, max= 384, avg=353.68, stdev=19.88, samples=19 00:44:15.453 lat (msec) : 50=98.59%, 100=0.96%, 250=0.45% 00:44:15.453 cpu : usr=98.33%, sys=1.17%, ctx=14, majf=0, minf=1635 00:44:15.453 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.453 filename2: (groupid=0, jobs=1): err= 0: pid=3712785: Sun Nov 10 00:17:40 2024 00:44:15.453 read: IOPS=354, BW=1416KiB/s (1450kB/s)(13.9MiB/10032msec) 00:44:15.453 slat (usec): min=14, max=156, avg=57.26, stdev=19.96 00:44:15.453 clat (msec): min=29, max=100, avg=44.70, stdev= 3.98 00:44:15.453 lat (msec): min=29, max=100, avg=44.76, stdev= 3.97 00:44:15.453 clat percentiles (msec): 00:44:15.453 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 44], 00:44:15.453 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:15.453 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:44:15.453 | 99.00th=[ 51], 99.50th=[ 60], 99.90th=[ 99], 99.95th=[ 101], 00:44:15.453 | 99.99th=[ 101] 00:44:15.453 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1414.74, stdev=51.80, samples=19 00:44:15.453 iops : min= 320, max= 384, avg=353.68, stdev=12.95, samples=19 00:44:15.453 lat (msec) : 50=98.73%, 100=1.21%, 250=0.06% 00:44:15.453 cpu : usr=98.29%, sys=1.23%, ctx=20, majf=0, minf=1633 00:44:15.453 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 
complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.453 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.454 filename2: (groupid=0, jobs=1): err= 0: pid=3712786: Sun Nov 10 00:17:40 2024 00:44:15.454 read: IOPS=353, BW=1413KiB/s (1447kB/s)(13.8MiB/10008msec) 00:44:15.454 slat (usec): min=12, max=113, avg=26.65, stdev=10.66 00:44:15.454 clat (msec): min=18, max=100, avg=45.04, stdev= 4.15 00:44:15.454 lat (msec): min=18, max=101, avg=45.07, stdev= 4.15 00:44:15.454 clat percentiles (msec): 00:44:15.454 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:44:15.454 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:44:15.454 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 47], 00:44:15.454 | 99.00th=[ 51], 99.50th=[ 72], 99.90th=[ 102], 99.95th=[ 102], 00:44:15.454 | 99.99th=[ 102] 00:44:15.454 bw ( KiB/s): min= 1280, max= 1536, per=4.16%, avg=1414.74, stdev=49.84, samples=19 00:44:15.454 iops : min= 320, max= 384, avg=353.68, stdev=12.46, samples=19 00:44:15.454 lat (msec) : 20=0.06%, 50=98.53%, 100=0.96%, 250=0.45% 00:44:15.454 cpu : usr=96.11%, sys=2.32%, ctx=203, majf=0, minf=1631 00:44:15.454 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:15.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.454 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.454 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.454 filename2: (groupid=0, jobs=1): err= 0: pid=3712787: Sun Nov 10 00:17:40 2024 00:44:15.454 read: IOPS=357, BW=1431KiB/s (1465kB/s)(14.0MiB/10019msec) 00:44:15.454 slat (nsec): min=5439, max=94856, avg=19357.78, stdev=13332.51 00:44:15.454 clat (usec): min=16597, max=61518, avg=44520.92, stdev=2851.01 00:44:15.454 lat (usec): min=16610, max=61542, avg=44540.28, stdev=2851.99 00:44:15.454 clat percentiles (usec): 00:44:15.454 | 1.00th=[29754], 5.00th=[43779], 10.00th=[43779], 20.00th=[44303], 00:44:15.454 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:44:15.454 | 70.00th=[44827], 80.00th=[45351], 90.00th=[45876], 95.00th=[46400], 00:44:15.454 | 99.00th=[48497], 99.50th=[52167], 99.90th=[61604], 99.95th=[61604], 00:44:15.454 | 99.99th=[61604] 00:44:15.454 bw ( KiB/s): min= 1408, max= 1536, per=4.20%, avg=1427.20, stdev=46.89, samples=20 00:44:15.454 iops : min= 352, max= 384, avg=356.80, stdev=11.72, samples=20 00:44:15.454 lat (msec) : 20=0.45%, 50=98.60%, 100=0.95% 00:44:15.454 cpu : usr=97.16%, sys=1.86%, ctx=81, majf=0, minf=1632 00:44:15.454 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:15.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.454 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.454 issued rwts: total=3584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:15.454 00:44:15.454 Run status group 0 (all jobs): 00:44:15.454 READ: bw=33.2MiB/s (34.8MB/s), 1411KiB/s-1431KiB/s (1445kB/s-1465kB/s), io=333MiB (349MB), run=10003-10049msec 00:44:15.454 ----------------------------------------------------- 00:44:15.454 Suppressions used: 00:44:15.454 count bytes template 00:44:15.454 45 402 /usr/src/fio/parse.c 00:44:15.454 1 8 libtcmalloc_minimal.so 
00:44:15.454 1 904 libcrypto.so 00:44:15.454 ----------------------------------------------------- 00:44:15.454 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.713 bdev_null0 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.713 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.714 [2024-11-10 00:17:41.744038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
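The rpc_cmd calls traced here drive the SPDK JSON-RPC server, so this subsystem setup corresponds to ordinary scripts/rpc.py calls against the running nvmf_tgt. A sketch of the equivalent calls, assuming the tcp transport was created earlier in the run (all arguments are the ones shown in the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and protection type 1 (DIF)
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# export it over NVMe-oF/TCP on the listener used throughout this log
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420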
00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.714 bdev_null1 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:15.714 { 00:44:15.714 "params": { 00:44:15.714 "name": "Nvme$subsystem", 00:44:15.714 "trtype": "$TEST_TRANSPORT", 00:44:15.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:15.714 "adrfam": "ipv4", 00:44:15.714 "trsvcid": "$NVMF_PORT", 00:44:15.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:15.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:15.714 "hdgst": ${hdgst:-false}, 00:44:15.714 "ddgst": ${ddgst:-false} 00:44:15.714 }, 00:44:15.714 "method": "bdev_nvme_attach_controller" 00:44:15.714 } 00:44:15.714 EOF 00:44:15.714 )") 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:15.714 { 00:44:15.714 "params": { 00:44:15.714 "name": "Nvme$subsystem", 00:44:15.714 "trtype": "$TEST_TRANSPORT", 00:44:15.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:15.714 "adrfam": "ipv4", 00:44:15.714 "trsvcid": "$NVMF_PORT", 00:44:15.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:15.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:15.714 "hdgst": ${hdgst:-false}, 00:44:15.714 "ddgst": ${ddgst:-false} 00:44:15.714 }, 00:44:15.714 "method": "bdev_nvme_attach_controller" 00:44:15.714 } 00:44:15.714 EOF 00:44:15.714 )") 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:15.714 00:17:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:15.714 "params": { 00:44:15.714 "name": "Nvme0", 00:44:15.714 "trtype": "tcp", 00:44:15.714 "traddr": "10.0.0.2", 00:44:15.714 "adrfam": "ipv4", 00:44:15.714 "trsvcid": "4420", 00:44:15.714 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:15.714 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:15.714 "hdgst": false, 00:44:15.714 "ddgst": false 00:44:15.714 }, 00:44:15.714 "method": "bdev_nvme_attach_controller" 00:44:15.714 },{ 00:44:15.714 "params": { 00:44:15.714 "name": "Nvme1", 00:44:15.714 "trtype": "tcp", 00:44:15.714 "traddr": "10.0.0.2", 00:44:15.714 "adrfam": "ipv4", 00:44:15.714 "trsvcid": "4420", 00:44:15.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:15.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:15.714 "hdgst": false, 00:44:15.714 "ddgst": false 00:44:15.714 }, 00:44:15.714 "method": "bdev_nvme_attach_controller" 00:44:15.714 }' 00:44:15.715 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:15.715 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:15.715 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:44:15.715 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:15.715 00:17:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:15.973 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:15.973 ... 00:44:15.973 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:15.973 ... 
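The job lines that follow reflect the parameters set a few steps back (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1): fio reads a comma-separated bs as read,write,trim block sizes, which is why each job reports (R) 8192B, (W) 16.0KiB, (T) 128KiB, and the two job sections times numjobs=2 give the "Starting 4 threads" line below. An illustrative stand-in for the job file target/dif.sh feeds to fio on /dev/fd/61 (the real file is generated by gen_fio_conf; the Nvme*n1 filenames and time_based are assumptions):

cat > /tmp/dif_rand.fio <<'FIO'
[global]
ioengine=spdk_bdev
# the trace passes --spdk_json_conf /dev/fd/62 on the fio command line instead
spdk_json_conf=/dev/fd/62
thread=1
rw=randread
bs=8k,16k,128k
numjobs=2
iodepth=8
runtime=5
time_based=1
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
FIO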
00:44:15.973 fio-3.35 00:44:15.973 Starting 4 threads 00:44:22.544 00:44:22.544 filename0: (groupid=0, jobs=1): err= 0: pid=3714194: Sun Nov 10 00:17:48 2024 00:44:22.544 read: IOPS=1461, BW=11.4MiB/s (12.0MB/s)(57.1MiB/5001msec) 00:44:22.544 slat (nsec): min=5562, max=60028, avg=22317.14, stdev=7620.01 00:44:22.544 clat (usec): min=1032, max=9725, avg=5387.99, stdev=805.59 00:44:22.544 lat (usec): min=1052, max=9745, avg=5410.31, stdev=805.75 00:44:22.544 clat percentiles (usec): 00:44:22.544 | 1.00th=[ 1958], 5.00th=[ 4948], 10.00th=[ 5145], 20.00th=[ 5211], 00:44:22.544 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5342], 60.00th=[ 5407], 00:44:22.544 | 70.00th=[ 5473], 80.00th=[ 5538], 90.00th=[ 5669], 95.00th=[ 5866], 00:44:22.544 | 99.00th=[ 8848], 99.50th=[ 9110], 99.90th=[ 9503], 99.95th=[ 9634], 00:44:22.544 | 99.99th=[ 9765] 00:44:22.544 bw ( KiB/s): min=11414, max=11904, per=24.97%, avg=11689.56, stdev=135.90, samples=9 00:44:22.544 iops : min= 1426, max= 1488, avg=1461.11, stdev=17.18, samples=9 00:44:22.544 lat (msec) : 2=1.08%, 4=1.59%, 10=97.33% 00:44:22.544 cpu : usr=95.12%, sys=3.90%, ctx=107, majf=0, minf=1633 00:44:22.544 IO depths : 1=0.1%, 2=20.3%, 4=53.5%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:22.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.544 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.544 issued rwts: total=7310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.544 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:22.544 filename0: (groupid=0, jobs=1): err= 0: pid=3714195: Sun Nov 10 00:17:48 2024 00:44:22.544 read: IOPS=1462, BW=11.4MiB/s (12.0MB/s)(57.2MiB/5004msec) 00:44:22.544 slat (nsec): min=4805, max=58528, avg=19765.75, stdev=7724.88 00:44:22.544 clat (usec): min=965, max=10582, avg=5405.39, stdev=390.30 00:44:22.544 lat (usec): min=975, max=10612, avg=5425.16, stdev=390.50 00:44:22.544 clat percentiles (usec): 00:44:22.544 | 1.00th=[ 4686], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5211], 00:44:22.545 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:44:22.545 | 70.00th=[ 5473], 80.00th=[ 5538], 90.00th=[ 5669], 95.00th=[ 5800], 00:44:22.545 | 99.00th=[ 5997], 99.50th=[ 7242], 99.90th=[10290], 99.95th=[10290], 00:44:22.545 | 99.99th=[10552] 00:44:22.545 bw ( KiB/s): min=11392, max=11904, per=24.98%, avg=11694.40, stdev=141.80, samples=10 00:44:22.545 iops : min= 1424, max= 1488, avg=1461.80, stdev=17.73, samples=10 00:44:22.545 lat (usec) : 1000=0.03% 00:44:22.545 lat (msec) : 2=0.03%, 4=0.25%, 10=99.59%, 20=0.11% 00:44:22.545 cpu : usr=95.70%, sys=3.74%, ctx=12, majf=0, minf=1636 00:44:22.545 IO depths : 1=0.9%, 2=10.8%, 4=62.0%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:22.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.545 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.545 issued rwts: total=7317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.545 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:22.545 filename1: (groupid=0, jobs=1): err= 0: pid=3714196: Sun Nov 10 00:17:48 2024 00:44:22.545 read: IOPS=1465, BW=11.4MiB/s (12.0MB/s)(57.2MiB/5001msec) 00:44:22.545 slat (nsec): min=5549, max=73629, avg=23707.21, stdev=8931.99 00:44:22.545 clat (usec): min=1160, max=9920, avg=5361.49, stdev=422.91 00:44:22.545 lat (usec): min=1182, max=9946, avg=5385.20, stdev=423.73 00:44:22.545 clat percentiles (usec): 00:44:22.545 | 1.00th=[ 4359], 5.00th=[ 5014], 10.00th=[ 
5145], 20.00th=[ 5211], 00:44:22.545 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5342], 60.00th=[ 5407], 00:44:22.545 | 70.00th=[ 5473], 80.00th=[ 5538], 90.00th=[ 5604], 95.00th=[ 5669], 00:44:22.545 | 99.00th=[ 6521], 99.50th=[ 8029], 99.90th=[ 8848], 99.95th=[ 9110], 00:44:22.545 | 99.99th=[ 9896] 00:44:22.545 bw ( KiB/s): min=11392, max=11904, per=25.03%, avg=11719.11, stdev=158.21, samples=9 00:44:22.545 iops : min= 1424, max= 1488, avg=1464.89, stdev=19.78, samples=9 00:44:22.545 lat (msec) : 2=0.14%, 4=0.66%, 10=99.21% 00:44:22.545 cpu : usr=95.84%, sys=3.58%, ctx=11, majf=0, minf=1636 00:44:22.545 IO depths : 1=2.3%, 2=23.8%, 4=51.1%, 8=22.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:22.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.545 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.545 issued rwts: total=7328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.545 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:22.545 filename1: (groupid=0, jobs=1): err= 0: pid=3714197: Sun Nov 10 00:17:48 2024 00:44:22.545 read: IOPS=1465, BW=11.4MiB/s (12.0MB/s)(57.2MiB/5002msec) 00:44:22.545 slat (nsec): min=5315, max=73655, avg=23693.71, stdev=8859.39 00:44:22.545 clat (usec): min=1035, max=9891, avg=5362.90, stdev=905.62 00:44:22.545 lat (usec): min=1055, max=9914, avg=5386.60, stdev=906.01 00:44:22.545 clat percentiles (usec): 00:44:22.545 | 1.00th=[ 1500], 5.00th=[ 4948], 10.00th=[ 5145], 20.00th=[ 5211], 00:44:22.545 | 30.00th=[ 5276], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:44:22.545 | 70.00th=[ 5473], 80.00th=[ 5538], 90.00th=[ 5604], 95.00th=[ 5800], 00:44:22.545 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[ 9634], 99.95th=[ 9765], 00:44:22.545 | 99.99th=[ 9896] 00:44:22.545 bw ( KiB/s): min=11392, max=11920, per=25.03%, avg=11719.11, stdev=185.41, samples=9 00:44:22.545 iops : min= 1424, max= 1490, avg=1464.89, stdev=23.18, samples=9 00:44:22.545 lat (msec) : 2=1.94%, 4=1.15%, 10=96.92% 00:44:22.545 cpu : usr=95.00%, sys=3.64%, ctx=148, majf=0, minf=1636 00:44:22.545 IO depths : 1=0.9%, 2=23.5%, 4=51.0%, 8=24.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:22.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.545 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.545 issued rwts: total=7328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.545 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:22.545 00:44:22.545 Run status group 0 (all jobs): 00:44:22.545 READ: bw=45.7MiB/s (47.9MB/s), 11.4MiB/s-11.4MiB/s (12.0MB/s-12.0MB/s), io=229MiB (240MB), run=5001-5004msec 00:44:23.112 ----------------------------------------------------- 00:44:23.112 Suppressions used: 00:44:23.112 count bytes template 00:44:23.112 6 52 /usr/src/fio/parse.c 00:44:23.112 1 8 libtcmalloc_minimal.so 00:44:23.112 1 904 libcrypto.so 00:44:23.112 ----------------------------------------------------- 00:44:23.112 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:23.112 00:44:23.112 real 0m27.957s 00:44:23.112 user 4m36.157s 00:44:23.112 sys 0m6.905s 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:23.112 00:17:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:23.112 ************************************ 00:44:23.112 END TEST fio_dif_rand_params 00:44:23.112 ************************************ 00:44:23.112 00:17:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:44:23.112 00:17:49 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:44:23.112 00:17:49 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:23.112 00:17:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:23.112 ************************************ 00:44:23.112 START TEST fio_dif_digest 00:44:23.112 ************************************ 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:44:23.112 00:17:49 
nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:23.112 bdev_null0 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:23.112 [2024-11-10 00:17:49.301378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:23.112 { 00:44:23.112 "params": { 00:44:23.112 "name": "Nvme$subsystem", 00:44:23.112 "trtype": "$TEST_TRANSPORT", 00:44:23.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:23.112 "adrfam": "ipv4", 00:44:23.112 "trsvcid": "$NVMF_PORT", 00:44:23.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:23.112 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:44:23.112 "hdgst": ${hdgst:-false}, 00:44:23.112 "ddgst": ${ddgst:-false} 00:44:23.112 }, 00:44:23.112 "method": "bdev_nvme_attach_controller" 00:44:23.112 } 00:44:23.112 EOF 00:44:23.112 )") 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:44:23.112 00:17:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:44:23.370 00:17:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:44:23.370 00:17:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:23.370 "params": { 00:44:23.370 "name": "Nvme0", 00:44:23.370 "trtype": "tcp", 00:44:23.370 "traddr": "10.0.0.2", 00:44:23.370 "adrfam": "ipv4", 00:44:23.370 "trsvcid": "4420", 00:44:23.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:23.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:23.370 "hdgst": true, 00:44:23.370 "ddgst": true 00:44:23.370 }, 00:44:23.370 "method": "bdev_nvme_attach_controller" 00:44:23.370 }' 00:44:23.370 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:23.370 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:23.370 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # break 00:44:23.370 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:23.370 00:17:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:23.628 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:23.628 ... 00:44:23.628 fio-3.35 00:44:23.628 Starting 3 threads 00:44:35.829 00:44:35.829 filename0: (groupid=0, jobs=1): err= 0: pid=3715184: Sun Nov 10 00:18:00 2024 00:44:35.829 read: IOPS=173, BW=21.7MiB/s (22.8MB/s)(218MiB/10048msec) 00:44:35.829 slat (nsec): min=6487, max=62409, avg=23968.50, stdev=5179.78 00:44:35.829 clat (usec): min=13495, max=50888, avg=17201.73, stdev=1583.72 00:44:35.829 lat (usec): min=13521, max=50927, avg=17225.70, stdev=1583.87 00:44:35.829 clat percentiles (usec): 00:44:35.829 | 1.00th=[14484], 5.00th=[15401], 10.00th=[15795], 20.00th=[16319], 00:44:35.829 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:44:35.829 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:44:35.829 | 99.00th=[20055], 99.50th=[20579], 99.90th=[48497], 99.95th=[51119], 00:44:35.829 | 99.99th=[51119] 00:44:35.829 bw ( KiB/s): min=21760, max=23040, per=33.94%, avg=22336.00, stdev=360.85, samples=20 00:44:35.829 iops : min= 170, max= 180, avg=174.50, stdev= 2.82, samples=20 00:44:35.829 lat (msec) : 20=98.63%, 50=1.32%, 100=0.06% 00:44:35.829 cpu : usr=95.47%, sys=4.00%, ctx=17, majf=0, minf=1636 00:44:35.829 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:35.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:35.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:35.829 issued rwts: total=1747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:35.829 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:35.829 filename0: (groupid=0, jobs=1): err= 0: pid=3715185: Sun Nov 10 00:18:00 2024 00:44:35.829 read: IOPS=170, BW=21.3MiB/s (22.3MB/s)(214MiB/10045msec) 00:44:35.829 slat (nsec): min=5170, max=53034, avg=24360.53, stdev=4192.96 00:44:35.829 clat (usec): min=13612, max=56291, avg=17559.33, stdev=1692.62 00:44:35.829 lat (usec): min=13634, max=56314, avg=17583.69, stdev=1692.59 00:44:35.829 clat percentiles (usec): 00:44:35.829 | 1.00th=[14877], 5.00th=[15795], 10.00th=[16188], 20.00th=[16712], 00:44:35.829 | 30.00th=[16909], 40.00th=[17171], 
50.00th=[17433], 60.00th=[17695], 00:44:35.829 | 70.00th=[17957], 80.00th=[18220], 90.00th=[19006], 95.00th=[19268], 00:44:35.829 | 99.00th=[20317], 99.50th=[21103], 99.90th=[53740], 99.95th=[56361], 00:44:35.829 | 99.99th=[56361] 00:44:35.829 bw ( KiB/s): min=20992, max=22528, per=33.24%, avg=21875.20, stdev=443.21, samples=20 00:44:35.829 iops : min= 164, max= 176, avg=170.90, stdev= 3.46, samples=20 00:44:35.829 lat (msec) : 20=97.84%, 50=2.05%, 100=0.12% 00:44:35.829 cpu : usr=95.47%, sys=3.80%, ctx=68, majf=0, minf=1639 00:44:35.829 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:35.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:35.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:35.829 issued rwts: total=1711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:35.829 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:35.829 filename0: (groupid=0, jobs=1): err= 0: pid=3715186: Sun Nov 10 00:18:00 2024 00:44:35.829 read: IOPS=170, BW=21.3MiB/s (22.3MB/s)(214MiB/10046msec) 00:44:35.829 slat (nsec): min=5388, max=47643, avg=23442.31, stdev=3736.98 00:44:35.829 clat (usec): min=14037, max=56376, avg=17592.35, stdev=1635.60 00:44:35.829 lat (usec): min=14056, max=56398, avg=17615.79, stdev=1635.94 00:44:35.829 clat percentiles (usec): 00:44:35.829 | 1.00th=[15401], 5.00th=[15926], 10.00th=[16319], 20.00th=[16712], 00:44:35.829 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:44:35.829 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18744], 95.00th=[19268], 00:44:35.829 | 99.00th=[20579], 99.50th=[21627], 99.90th=[49021], 99.95th=[56361], 00:44:35.829 | 99.99th=[56361] 00:44:35.829 bw ( KiB/s): min=20521, max=22272, per=33.18%, avg=21838.85, stdev=417.47, samples=20 00:44:35.829 iops : min= 160, max= 174, avg=170.60, stdev= 3.32, samples=20 00:44:35.829 lat (msec) : 20=98.30%, 50=1.64%, 100=0.06% 00:44:35.829 cpu : usr=95.96%, sys=3.45%, ctx=15, majf=0, minf=1634 00:44:35.829 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:35.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:35.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:35.829 issued rwts: total=1708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:35.829 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:35.829 00:44:35.829 Run status group 0 (all jobs): 00:44:35.829 READ: bw=64.3MiB/s (67.4MB/s), 21.3MiB/s-21.7MiB/s (22.3MB/s-22.8MB/s), io=646MiB (677MB), run=10045-10048msec 00:44:35.829 ----------------------------------------------------- 00:44:35.829 Suppressions used: 00:44:35.829 count bytes template 00:44:35.829 5 44 /usr/src/fio/parse.c 00:44:35.829 1 8 libtcmalloc_minimal.so 00:44:35.829 1 904 libcrypto.so 00:44:35.829 ----------------------------------------------------- 00:44:35.829 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:35.829 00:44:35.829 real 0m12.405s 00:44:35.829 user 0m31.043s 00:44:35.829 sys 0m1.603s 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:35.829 00:18:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:35.829 ************************************ 00:44:35.829 END TEST fio_dif_digest 00:44:35.829 ************************************ 00:44:35.829 00:18:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:35.829 00:18:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:35.829 00:18:01 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:35.829 00:18:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:44:35.829 00:18:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:35.829 00:18:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:44:35.829 00:18:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:35.829 00:18:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:35.829 rmmod nvme_tcp 00:44:35.829 rmmod nvme_fabrics 00:44:35.829 rmmod nvme_keyring 00:44:35.829 00:18:01 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:35.829 00:18:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:44:35.829 00:18:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:44:35.829 00:18:01 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3708407 ']' 00:44:35.829 00:18:01 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3708407 00:44:35.829 00:18:01 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 3708407 ']' 00:44:35.829 00:18:01 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 3708407 00:44:35.829 00:18:01 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:44:35.829 00:18:01 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:35.829 00:18:01 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3708407 00:44:35.829 00:18:01 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:44:35.829 00:18:01 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:44:35.829 00:18:01 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3708407' 00:44:35.829 killing process with pid 3708407 00:44:35.829 00:18:01 nvmf_dif -- common/autotest_common.sh@971 -- # kill 3708407 00:44:35.829 00:18:01 nvmf_dif -- common/autotest_common.sh@976 -- # wait 3708407 00:44:36.766 00:18:02 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:36.766 00:18:02 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:37.702 Waiting for block devices as requested 00:44:37.966 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:37.966 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:38.226 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:38.226 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:38.226 
0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:38.226 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:38.483 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:38.483 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:38.483 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:38.483 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:38.483 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:38.742 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:38.742 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:38.742 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:39.000 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:39.000 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:39.000 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:39.258 00:18:05 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:39.258 00:18:05 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:39.258 00:18:05 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:44:39.258 00:18:05 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:44:39.258 00:18:05 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:39.258 00:18:05 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:44:39.258 00:18:05 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:39.258 00:18:05 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:39.258 00:18:05 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:39.258 00:18:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:39.258 00:18:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.156 00:18:07 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:41.156 00:44:41.156 real 1m15.588s 00:44:41.156 user 6m45.904s 00:44:41.156 sys 0m17.644s 00:44:41.156 00:18:07 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:41.157 00:18:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:41.157 ************************************ 00:44:41.157 END TEST nvmf_dif 00:44:41.157 ************************************ 00:44:41.157 00:18:07 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:41.157 00:18:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:44:41.157 00:18:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:41.157 00:18:07 -- common/autotest_common.sh@10 -- # set +x 00:44:41.157 ************************************ 00:44:41.157 START TEST nvmf_abort_qd_sizes 00:44:41.157 ************************************ 00:44:41.157 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:41.157 * Looking for test storage... 
00:44:41.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:41.157 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:41.157 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:44:41.157 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:41.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.416 --rc genhtml_branch_coverage=1 00:44:41.416 --rc genhtml_function_coverage=1 00:44:41.416 --rc genhtml_legend=1 00:44:41.416 --rc geninfo_all_blocks=1 00:44:41.416 --rc geninfo_unexecuted_blocks=1 00:44:41.416 00:44:41.416 ' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:44:41.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.416 --rc genhtml_branch_coverage=1 00:44:41.416 --rc genhtml_function_coverage=1 00:44:41.416 --rc genhtml_legend=1 00:44:41.416 --rc geninfo_all_blocks=1 00:44:41.416 --rc geninfo_unexecuted_blocks=1 00:44:41.416 00:44:41.416 ' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:41.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.416 --rc genhtml_branch_coverage=1 00:44:41.416 --rc genhtml_function_coverage=1 00:44:41.416 --rc genhtml_legend=1 00:44:41.416 --rc geninfo_all_blocks=1 00:44:41.416 --rc geninfo_unexecuted_blocks=1 00:44:41.416 00:44:41.416 ' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:41.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.416 --rc genhtml_branch_coverage=1 00:44:41.416 --rc genhtml_function_coverage=1 00:44:41.416 --rc genhtml_legend=1 00:44:41.416 --rc geninfo_all_blocks=1 00:44:41.416 --rc geninfo_unexecuted_blocks=1 00:44:41.416 00:44:41.416 ' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:41.416 00:18:07 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:41.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:44:41.416 00:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:44:43.319 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:44:43.319 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:44:43.319 Found net devices under 0000:0a:00.0: cvl_0_0 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
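Note: the two "Found net devices under 0000:0a:00.x" records above come out of a plain sysfs walk: for every PCI function whose vendor/device pair is in the supported list (the E810 entries 0x8086:0x159b matched here), the attached kernel interface name is read from /sys/bus/pci/devices/<bdf>/net/. A condensed, hedged paraphrase of that loop follows, covering only the e810 case exercised in this run; the real nvmf/common.sh also handles x722 and Mellanox IDs and the RDMA-only branches.

  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")     # e.g. 0x8086
      device=$(<"$pci/device")     # e.g. 0x159b
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue
          dev=${net##*/}
          echo "Found net devices under ${pci##*/}: $dev"
          net_devs+=("$dev")       # the real script also checks that the link is up first
      done
  done

The two interfaces this yields, cvl_0_0 and cvl_0_1, are what the namespace setup just below turns into a back-to-back TCP path: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24 (the target side), while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24.
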
00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:44:43.319 Found net devices under 0000:0a:00.1: cvl_0_1 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:43.319 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:43.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:43.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:44:43.320 00:44:43.320 --- 10.0.0.2 ping statistics --- 00:44:43.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:43.320 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:44:43.320 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:43.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:43.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:44:43.320 00:44:43.320 --- 10.0.0.1 ping statistics --- 00:44:43.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:43.320 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:44:43.320 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:43.320 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:44:43.320 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:43.320 00:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:44.697 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:44.697 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:44.697 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:44.697 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:44.697 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:44.697 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:44.697 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:44.697 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:44.697 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:44.697 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:44.697 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:44.697 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:44.697 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:44.697 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:44.697 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:44.697 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:45.270 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3720728 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3720728 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 3720728 ']' 00:44:45.533 00:18:11 
nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:45.533 00:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:45.534 00:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:45.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:45.534 00:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:45.534 00:18:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:45.793 [2024-11-10 00:18:11.757448] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:44:45.793 [2024-11-10 00:18:11.757606] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:45.793 [2024-11-10 00:18:11.908282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:46.052 [2024-11-10 00:18:12.052922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:46.052 [2024-11-10 00:18:12.053001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:46.052 [2024-11-10 00:18:12.053027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:46.052 [2024-11-10 00:18:12.053052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:46.052 [2024-11-10 00:18:12.053073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:44:46.052 [2024-11-10 00:18:12.055937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:46.052 [2024-11-10 00:18:12.056007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:46.052 [2024-11-10 00:18:12.056107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:46.052 [2024-11-10 00:18:12.056114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:46.618 00:18:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:46.618 ************************************ 00:44:46.618 START TEST spdk_target_abort 00:44:46.618 ************************************ 00:44:46.618 00:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:44:46.618 00:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:46.619 00:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b 
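Note: the reactor start-up notices above belong to the nvmf_tgt instance that nvmfappstart launched inside the cvl_0_0_ns_spdk namespace (nvmf/common.sh@508-510), so the target owns the 10.0.0.2 side of the link while the test code stays in the root namespace. A hedged sketch of that launch-and-wait pattern follows; the polling loop is a simplified stand-in for the harness's waitforlisten helper, and the 0.5 s interval is an assumption.

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # 4 cores (-m 0xf), shm id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF)
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!

  # Wait until the RPC socket answers before issuing any rpc_cmd calls
  until "$SPDK_DIR/scripts/rpc.py" -t 1 spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done

Once the target is listening, the spdk_target_abort test below attaches the local NVMe drive at 0000:88:00.0 as controller "spdk_target", exports its namespace (spdk_targetn1) through nqn.2016-06.io.spdk:testnqn on 10.0.0.2:4420, and drives it with build/examples/abort at queue depths 4, 24 and 64; roughly speaking, the closing "success / unsuccessful / failed" counters are aborts that took effect, aborts whose target command had already completed, and abort commands that themselves failed.
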
spdk_target 00:44:46.619 00:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:46.619 00:18:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.900 spdk_targetn1 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.900 [2024-11-10 00:18:15.662532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.900 [2024-11-10 00:18:15.708910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 
-- # local target r 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:49.900 00:18:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:53.180 Initializing NVMe Controllers 00:44:53.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:53.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:53.180 Initialization complete. Launching workers. 00:44:53.180 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9472, failed: 0 00:44:53.180 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1223, failed to submit 8249 00:44:53.180 success 699, unsuccessful 524, failed 0 00:44:53.180 00:18:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:53.180 00:18:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:56.526 Initializing NVMe Controllers 00:44:56.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:56.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:56.526 Initialization complete. Launching workers. 
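The trace above brings up the SPDK NVMe-oF/TCP target purely over JSON-RPC (rpc_cmd in the harness is a wrapper around scripts/rpc.py) and then drives it with the bundled abort example at increasing queue depths. A minimal standalone sketch of the same sequence, assuming a running spdk_tgt on the default RPC socket; the PCIe address, NQN, and 10.0.0.2:4420 listener are simply the values used in this run:

  # attach the local NVMe drive and export it over TCP
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # exercise the subsystem with aborts at a given queue depth (qd=4 here, as in the first pass)
  build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'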
00:44:56.526 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8397, failed: 0 00:44:56.526 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1235, failed to submit 7162 00:44:56.526 success 304, unsuccessful 931, failed 0 00:44:56.526 00:18:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:56.526 00:18:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:59.808 Initializing NVMe Controllers 00:44:59.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:59.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:59.808 Initialization complete. Launching workers. 00:44:59.808 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27398, failed: 0 00:44:59.808 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2733, failed to submit 24665 00:44:59.808 success 211, unsuccessful 2522, failed 0 00:44:59.808 00:18:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:59.808 00:18:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:59.808 00:18:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:59.808 00:18:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:59.808 00:18:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:59.808 00:18:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:59.808 00:18:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:01.181 00:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:01.181 00:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3720728 00:45:01.181 00:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 3720728 ']' 00:45:01.181 00:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 3720728 00:45:01.181 00:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:45:01.181 00:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:01.181 00:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3720728 00:45:01.181 00:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:45:01.181 00:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:45:01.181 00:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3720728' 00:45:01.181 killing process with pid 3720728 00:45:01.181 00:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 3720728 00:45:01.181 00:18:27 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@976 -- # wait 3720728 00:45:02.114 00:45:02.114 real 0m15.445s 00:45:02.114 user 1m0.232s 00:45:02.114 sys 0m2.840s 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:02.114 ************************************ 00:45:02.114 END TEST spdk_target_abort 00:45:02.114 ************************************ 00:45:02.114 00:18:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:45:02.114 00:18:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:45:02.114 00:18:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:45:02.114 00:18:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:02.114 ************************************ 00:45:02.114 START TEST kernel_target_abort 00:45:02.114 ************************************ 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:45:02.114 00:18:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:03.488 Waiting for block devices as requested 00:45:03.488 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:03.488 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:03.488 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:03.747 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:03.747 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:03.747 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:04.006 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:04.006 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:04.006 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:04.006 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:04.006 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:04.265 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:04.265 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:04.265 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:04.265 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:04.524 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:04.524 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:05.092 00:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:45:05.092 00:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:45:05.092 00:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:45:05.092 00:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:45:05.092 00:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:45:05.092 00:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:45:05.092 00:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:45:05.092 00:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:45:05.092 00:18:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:45:05.092 No valid GPT data, bailing 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:05.092 00:18:31 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:45:05.092 00:45:05.092 Discovery Log Number of Records 2, Generation counter 2 00:45:05.092 =====Discovery Log Entry 0====== 00:45:05.092 trtype: tcp 00:45:05.092 adrfam: ipv4 00:45:05.092 subtype: current discovery subsystem 00:45:05.092 treq: not specified, sq flow control disable supported 00:45:05.092 portid: 1 00:45:05.092 trsvcid: 4420 00:45:05.092 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:05.092 traddr: 10.0.0.1 00:45:05.092 eflags: none 00:45:05.092 sectype: none 00:45:05.092 =====Discovery Log Entry 1====== 00:45:05.092 trtype: tcp 00:45:05.092 adrfam: ipv4 00:45:05.092 subtype: nvme subsystem 00:45:05.092 treq: not specified, sq flow control disable supported 00:45:05.092 portid: 1 00:45:05.092 trsvcid: 4420 00:45:05.092 subnqn: nqn.2016-06.io.spdk:testnqn 00:45:05.092 traddr: 10.0.0.1 00:45:05.092 eflags: none 00:45:05.092 sectype: none 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:05.092 00:18:31 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:05.092 00:18:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:08.374 Initializing NVMe Controllers 00:45:08.374 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:08.374 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:08.374 Initialization complete. Launching workers. 00:45:08.374 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37873, failed: 0 00:45:08.374 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37873, failed to submit 0 00:45:08.374 success 0, unsuccessful 37873, failed 0 00:45:08.374 00:18:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:08.374 00:18:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:11.654 Initializing NVMe Controllers 00:45:11.654 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:11.654 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:11.654 Initialization complete. Launching workers. 
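For the kernel-target half, nvmf/common.sh builds the target by hand through configfs rather than SPDK RPCs: it creates the subsystem, namespace, and port directories, points the namespace at the local drive, opens a TCP listener on 10.0.0.1:4420, links the subsystem into the port, and checks the result with nvme discover. xtrace does not show the redirect targets of the echo commands, so the configfs attribute names below are the standard nvmet ones assumed here rather than read from the trace; run as root with the nvmet/nvmet-tcp modules loaded:

  mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir /sys/kernel/config/nvmet/ports/1
  echo 1            > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1            > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
  # the discovery log above (two records) is what a healthy target answers with
  nvme discover -t tcp -a 10.0.0.1 -s 4420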
00:45:11.654 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67630, failed: 0 00:45:11.654 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17054, failed to submit 50576 00:45:11.654 success 0, unsuccessful 17054, failed 0 00:45:11.654 00:18:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:11.654 00:18:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:14.942 Initializing NVMe Controllers 00:45:14.942 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:14.942 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:14.942 Initialization complete. Launching workers. 00:45:14.942 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63330, failed: 0 00:45:14.942 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15826, failed to submit 47504 00:45:14.942 success 0, unsuccessful 15826, failed 0 00:45:14.942 00:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:14.942 00:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:14.942 00:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:45:14.942 00:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:14.942 00:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:14.942 00:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:14.942 00:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:14.942 00:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:45:14.942 00:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:45:14.942 00:18:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:15.875 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:15.875 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:15.875 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:15.875 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:15.875 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:15.875 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:15.875 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:15.875 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:15.875 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:15.875 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:15.875 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:15.875 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:15.875 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:16.134 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:16.134 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:45:16.134 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:17.072 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:45:17.072 00:45:17.072 real 0m14.847s 00:45:17.072 user 0m7.327s 00:45:17.072 sys 0m3.415s 00:45:17.072 00:18:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:17.072 00:18:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:17.072 ************************************ 00:45:17.072 END TEST kernel_target_abort 00:45:17.072 ************************************ 00:45:17.072 00:18:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:45:17.072 00:18:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:45:17.072 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:17.072 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:45:17.072 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:17.072 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:45:17.072 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:17.072 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:17.072 rmmod nvme_tcp 00:45:17.072 rmmod nvme_fabrics 00:45:17.073 rmmod nvme_keyring 00:45:17.073 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:17.073 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:45:17.073 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:45:17.073 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3720728 ']' 00:45:17.073 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3720728 00:45:17.073 00:18:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 3720728 ']' 00:45:17.073 00:18:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 3720728 00:45:17.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3720728) - No such process 00:45:17.073 00:18:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 3720728 is not found' 00:45:17.073 Process with pid 3720728 is not found 00:45:17.073 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:17.073 00:18:43 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:18.447 Waiting for block devices as requested 00:45:18.447 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:18.447 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:18.447 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:18.447 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:18.706 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:18.706 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:18.706 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:18.706 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:18.706 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:18.963 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:18.963 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:18.963 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:18.963 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:19.221 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:19.221 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:19.221 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:19.221 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:45:19.479 00:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:19.479 00:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:19.479 00:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:45:19.479 00:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:45:19.479 00:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:19.479 00:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:45:19.479 00:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:19.479 00:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:19.479 00:18:45 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:19.479 00:18:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:19.479 00:18:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:21.382 00:18:47 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:21.382 00:45:21.382 real 0m40.269s 00:45:21.382 user 1m9.958s 00:45:21.382 sys 0m9.538s 00:45:21.382 00:18:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:21.382 00:18:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:21.382 ************************************ 00:45:21.382 END TEST nvmf_abort_qd_sizes 00:45:21.382 ************************************ 00:45:21.640 00:18:47 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:21.640 00:18:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:45:21.640 00:18:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:45:21.640 00:18:47 -- common/autotest_common.sh@10 -- # set +x 00:45:21.640 ************************************ 00:45:21.640 START TEST keyring_file 00:45:21.640 ************************************ 00:45:21.640 00:18:47 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:21.640 * Looking for test storage... 
00:45:21.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:21.640 00:18:47 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:21.640 00:18:47 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:45:21.640 00:18:47 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:21.640 00:18:47 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:21.640 00:18:47 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:21.640 00:18:47 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:21.640 00:18:47 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:21.640 00:18:47 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:45:21.640 00:18:47 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:45:21.640 00:18:47 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:45:21.640 00:18:47 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:45:21.640 00:18:47 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:45:21.640 00:18:47 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:45:21.640 00:18:47 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:45:21.640 00:18:47 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:21.640 00:18:47 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@345 -- # : 1 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@353 -- # local d=1 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@355 -- # echo 1 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@353 -- # local d=2 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@355 -- # echo 2 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@368 -- # return 0 00:45:21.641 00:18:47 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:21.641 00:18:47 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:21.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:21.641 --rc genhtml_branch_coverage=1 00:45:21.641 --rc genhtml_function_coverage=1 00:45:21.641 --rc genhtml_legend=1 00:45:21.641 --rc geninfo_all_blocks=1 00:45:21.641 --rc geninfo_unexecuted_blocks=1 00:45:21.641 00:45:21.641 ' 00:45:21.641 00:18:47 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:21.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:21.641 --rc genhtml_branch_coverage=1 00:45:21.641 --rc genhtml_function_coverage=1 00:45:21.641 --rc genhtml_legend=1 00:45:21.641 --rc geninfo_all_blocks=1 
00:45:21.641 --rc geninfo_unexecuted_blocks=1 00:45:21.641 00:45:21.641 ' 00:45:21.641 00:18:47 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:21.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:21.641 --rc genhtml_branch_coverage=1 00:45:21.641 --rc genhtml_function_coverage=1 00:45:21.641 --rc genhtml_legend=1 00:45:21.641 --rc geninfo_all_blocks=1 00:45:21.641 --rc geninfo_unexecuted_blocks=1 00:45:21.641 00:45:21.641 ' 00:45:21.641 00:18:47 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:21.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:21.641 --rc genhtml_branch_coverage=1 00:45:21.641 --rc genhtml_function_coverage=1 00:45:21.641 --rc genhtml_legend=1 00:45:21.641 --rc geninfo_all_blocks=1 00:45:21.641 --rc geninfo_unexecuted_blocks=1 00:45:21.641 00:45:21.641 ' 00:45:21.641 00:18:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:21.641 00:18:47 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:21.641 00:18:47 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.641 00:18:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.641 00:18:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.641 00:18:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:45:21.641 00:18:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@51 -- # : 0 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:21.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:21.641 00:18:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:21.641 00:18:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:21.641 00:18:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:45:21.641 00:18:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:45:21.641 00:18:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:45:21.641 00:18:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
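Before any networking, keyring_file stages its two PSKs on disk and only later hands them to bdevperf's keyring over RPC; both steps are visible in the trace that follows. The prep_key helper mktemps a file, converts the raw hex key into the NVMe TLS PSK interchange format (the NVMeTLSkey-1 prefix visible in format_key) via a short inline python snippet, and tightens the permissions; the file is then registered by name against the bperf RPC socket. A rough standalone sketch of that pair of steps, with the formatted key string left as a placeholder because the helper derives the hash field and trailing checksum itself:

  key0_path=$(mktemp)
  # real contents look like NVMeTLSkey-1:<hmac>:<base64 key material + CRC>:, derived here
  # from the hex key 00112233445566778899aabbccddeeff; the literal below is only a stand-in
  echo 'NVMeTLSkey-1:<hmac>:<base64-psk-and-crc>:' > "$key0_path"
  chmod 0600 "$key0_path"
  # once bdevperf is up and listening on /var/tmp/bperf.sock, add the file to its keyring
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0_path"
  # and read it back to confirm the stored path and refcnt
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")'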
00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zMBmTeBlal 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zMBmTeBlal 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zMBmTeBlal 00:45:21.641 00:18:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.zMBmTeBlal 00:45:21.641 00:18:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lc3yRihOfV 00:45:21.641 00:18:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:21.641 00:18:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:21.900 00:18:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lc3yRihOfV 00:45:21.900 00:18:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lc3yRihOfV 00:45:21.900 00:18:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.lc3yRihOfV 00:45:21.900 00:18:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=3726959 00:45:21.900 00:18:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:21.900 00:18:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3726959 00:45:21.900 00:18:47 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3726959 ']' 00:45:21.900 00:18:47 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:21.900 00:18:47 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:21.900 00:18:47 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:21.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:21.900 00:18:47 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:21.900 00:18:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:21.900 [2024-11-10 00:18:47.954786] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:45:21.900 [2024-11-10 00:18:47.954949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726959 ] 00:45:22.158 [2024-11-10 00:18:48.111041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:22.158 [2024-11-10 00:18:48.249957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:45:23.091 00:18:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:23.091 [2024-11-10 00:18:49.171323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:23.091 null0 00:45:23.091 [2024-11-10 00:18:49.203348] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:23.091 [2024-11-10 00:18:49.203971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:23.091 00:18:49 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:23.091 [2024-11-10 00:18:49.231395] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:45:23.091 request: 00:45:23.091 { 00:45:23.091 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:45:23.091 "secure_channel": false, 00:45:23.091 "listen_address": { 00:45:23.091 "trtype": "tcp", 00:45:23.091 "traddr": "127.0.0.1", 00:45:23.091 "trsvcid": "4420" 00:45:23.091 }, 00:45:23.091 "method": "nvmf_subsystem_add_listener", 00:45:23.091 "req_id": 1 00:45:23.091 } 00:45:23.091 Got JSON-RPC error response 00:45:23.091 response: 00:45:23.091 { 00:45:23.091 
"code": -32602, 00:45:23.091 "message": "Invalid parameters" 00:45:23.091 } 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:23.091 00:18:49 keyring_file -- keyring/file.sh@47 -- # bperfpid=3727099 00:45:23.091 00:18:49 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:45:23.091 00:18:49 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3727099 /var/tmp/bperf.sock 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3727099 ']' 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:23.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:23.091 00:18:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:23.350 [2024-11-10 00:18:49.321252] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:45:23.350 [2024-11-10 00:18:49.321380] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727099 ] 00:45:23.350 [2024-11-10 00:18:49.463571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:23.610 [2024-11-10 00:18:49.600519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:24.179 00:18:50 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:24.179 00:18:50 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:45:24.179 00:18:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zMBmTeBlal 00:45:24.179 00:18:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zMBmTeBlal 00:45:24.437 00:18:50 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lc3yRihOfV 00:45:24.437 00:18:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lc3yRihOfV 00:45:24.696 00:18:50 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:45:24.696 00:18:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:24.696 00:18:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:24.696 00:18:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:24.696 00:18:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:45:24.954 00:18:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.zMBmTeBlal == \/\t\m\p\/\t\m\p\.\z\M\B\m\T\e\B\l\a\l ]] 00:45:24.954 00:18:51 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:45:24.954 00:18:51 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:45:24.954 00:18:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:24.954 00:18:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:24.954 00:18:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:25.212 00:18:51 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.lc3yRihOfV == \/\t\m\p\/\t\m\p\.\l\c\3\y\R\i\h\O\f\V ]] 00:45:25.212 00:18:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:45:25.212 00:18:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:25.212 00:18:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:25.212 00:18:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:25.212 00:18:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:25.212 00:18:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:25.470 00:18:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:25.470 00:18:51 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:45:25.470 00:18:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:25.470 00:18:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:25.470 00:18:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:25.470 00:18:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:25.470 00:18:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.036 00:18:51 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:45:26.036 00:18:51 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:26.036 00:18:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:26.036 [2024-11-10 00:18:52.206817] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:26.295 nvme0n1 00:45:26.295 00:18:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:45:26.295 00:18:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:26.295 00:18:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:26.295 00:18:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:26.295 00:18:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.295 00:18:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:26.552 00:18:52 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:45:26.553 00:18:52 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:45:26.553 00:18:52 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:45:26.553 00:18:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:26.553 00:18:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:26.553 00:18:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:26.553 00:18:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.811 00:18:52 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:45:26.811 00:18:52 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:26.811 Running I/O for 1 seconds... 00:45:28.251 6468.00 IOPS, 25.27 MiB/s 00:45:28.251 Latency(us) 00:45:28.251 [2024-11-09T23:18:54.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:28.251 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:28.251 nvme0n1 : 1.01 6520.95 25.47 0.00 0.00 19531.97 6844.87 30098.01 00:45:28.251 [2024-11-09T23:18:54.452Z] =================================================================================================================== 00:45:28.252 [2024-11-09T23:18:54.453Z] Total : 6520.95 25.47 0.00 0.00 19531.97 6844.87 30098.01 00:45:28.252 { 00:45:28.252 "results": [ 00:45:28.252 { 00:45:28.252 "job": "nvme0n1", 00:45:28.252 "core_mask": "0x2", 00:45:28.252 "workload": "randrw", 00:45:28.252 "percentage": 50, 00:45:28.252 "status": "finished", 00:45:28.252 "queue_depth": 128, 00:45:28.252 "io_size": 4096, 00:45:28.252 "runtime": 1.011509, 00:45:28.252 "iops": 6520.950382052953, 00:45:28.252 "mibps": 25.472462429894346, 00:45:28.252 "io_failed": 0, 00:45:28.252 "io_timeout": 0, 00:45:28.252 "avg_latency_us": 19531.96684859511, 00:45:28.252 "min_latency_us": 6844.8711111111115, 00:45:28.252 "max_latency_us": 30098.014814814815 00:45:28.252 } 00:45:28.252 ], 00:45:28.252 "core_count": 1 00:45:28.252 } 00:45:28.252 00:18:53 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:28.252 00:18:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:28.252 00:18:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:45:28.252 00:18:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:28.252 00:18:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:28.252 00:18:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:28.252 00:18:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:28.252 00:18:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:28.515 00:18:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:28.515 00:18:54 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:45:28.515 00:18:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:28.515 00:18:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:28.515 00:18:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:28.515 00:18:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:28.515 00:18:54 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:28.772 00:18:54 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:45:28.772 00:18:54 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:28.772 00:18:54 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:28.772 00:18:54 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:28.772 00:18:54 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:28.773 00:18:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:28.773 00:18:54 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:28.773 00:18:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:28.773 00:18:54 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:28.773 00:18:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:29.030 [2024-11-10 00:18:55.090471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:29.030 [2024-11-10 00:18:55.090684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:29.030 [2024-11-10 00:18:55.091646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:29.030 [2024-11-10 00:18:55.092642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:29.030 [2024-11-10 00:18:55.092673] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:29.030 [2024-11-10 00:18:55.092695] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:29.030 [2024-11-10 00:18:55.092719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
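The *ERROR* lines above are expected: keyring/file.sh@70 wraps this attach in NOT, so connecting with key1 (which presumably does not match the PSK the target side was configured with) has to fail for the test to pass, and the JSON-RPC request plus error response for that attempt follow below. A minimal stand-alone sketch of the same negative check, reusing the socket path and NQNs from this run, would be:

# Expect the attach with the mismatched PSK to fail; a zero exit status here
# would itself mean the test is broken.
if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo 'attach with the wrong PSK unexpectedly succeeded' >&2
    exit 1
fi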
00:45:29.030 request: 00:45:29.030 { 00:45:29.030 "name": "nvme0", 00:45:29.030 "trtype": "tcp", 00:45:29.030 "traddr": "127.0.0.1", 00:45:29.030 "adrfam": "ipv4", 00:45:29.030 "trsvcid": "4420", 00:45:29.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:29.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:29.030 "prchk_reftag": false, 00:45:29.030 "prchk_guard": false, 00:45:29.030 "hdgst": false, 00:45:29.030 "ddgst": false, 00:45:29.030 "psk": "key1", 00:45:29.030 "allow_unrecognized_csi": false, 00:45:29.030 "method": "bdev_nvme_attach_controller", 00:45:29.030 "req_id": 1 00:45:29.030 } 00:45:29.030 Got JSON-RPC error response 00:45:29.030 response: 00:45:29.030 { 00:45:29.030 "code": -5, 00:45:29.030 "message": "Input/output error" 00:45:29.030 } 00:45:29.030 00:18:55 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:29.030 00:18:55 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:29.030 00:18:55 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:29.030 00:18:55 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:29.030 00:18:55 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:45:29.030 00:18:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:29.030 00:18:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.030 00:18:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.030 00:18:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:29.030 00:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.287 00:18:55 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:45:29.287 00:18:55 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:45:29.287 00:18:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:29.287 00:18:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.287 00:18:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.287 00:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.287 00:18:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:29.544 00:18:55 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:45:29.545 00:18:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:45:29.545 00:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:29.802 00:18:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:45:29.802 00:18:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:45:30.059 00:18:56 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:45:30.059 00:18:56 keyring_file -- keyring/file.sh@78 -- # jq length 00:45:30.059 00:18:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:30.316 00:18:56 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:45:30.316 00:18:56 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.zMBmTeBlal 00:45:30.316 00:18:56 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.zMBmTeBlal 00:45:30.316 00:18:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:30.316 00:18:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.zMBmTeBlal 00:45:30.316 00:18:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:30.316 00:18:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:30.316 00:18:56 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:30.316 00:18:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:30.316 00:18:56 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zMBmTeBlal 00:45:30.316 00:18:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zMBmTeBlal 00:45:30.572 [2024-11-10 00:18:56.752951] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zMBmTeBlal': 0100660 00:45:30.572 [2024-11-10 00:18:56.753001] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:30.572 request: 00:45:30.572 { 00:45:30.572 "name": "key0", 00:45:30.572 "path": "/tmp/tmp.zMBmTeBlal", 00:45:30.572 "method": "keyring_file_add_key", 00:45:30.572 "req_id": 1 00:45:30.572 } 00:45:30.572 Got JSON-RPC error response 00:45:30.572 response: 00:45:30.572 { 00:45:30.572 "code": -1, 00:45:30.572 "message": "Operation not permitted" 00:45:30.572 } 00:45:30.572 00:18:56 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:30.572 00:18:56 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:30.572 00:18:56 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:30.572 00:18:56 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:30.572 00:18:56 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.zMBmTeBlal 00:45:30.830 00:18:56 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zMBmTeBlal 00:45:30.830 00:18:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zMBmTeBlal 00:45:31.087 00:18:57 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.zMBmTeBlal 00:45:31.087 00:18:57 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:45:31.087 00:18:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:31.087 00:18:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:31.087 00:18:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:31.087 00:18:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:31.087 00:18:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:31.345 00:18:57 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:45:31.345 00:18:57 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.345 00:18:57 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:31.345 00:18:57 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.345 00:18:57 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:31.345 00:18:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:31.345 00:18:57 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:31.345 00:18:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:31.345 00:18:57 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.345 00:18:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:31.604 [2024-11-10 00:18:57.579289] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.zMBmTeBlal': No such file or directory 00:45:31.604 [2024-11-10 00:18:57.579341] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:45:31.604 [2024-11-10 00:18:57.579378] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:45:31.604 [2024-11-10 00:18:57.579416] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:45:31.604 [2024-11-10 00:18:57.579435] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:31.604 [2024-11-10 00:18:57.579453] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:45:31.604 request: 00:45:31.604 { 00:45:31.604 "name": "nvme0", 00:45:31.604 "trtype": "tcp", 00:45:31.604 "traddr": "127.0.0.1", 00:45:31.604 "adrfam": "ipv4", 00:45:31.604 "trsvcid": "4420", 00:45:31.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:31.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:31.604 "prchk_reftag": false, 00:45:31.604 "prchk_guard": false, 00:45:31.604 "hdgst": false, 00:45:31.604 "ddgst": false, 00:45:31.604 "psk": "key0", 00:45:31.604 "allow_unrecognized_csi": false, 00:45:31.604 "method": "bdev_nvme_attach_controller", 00:45:31.604 "req_id": 1 00:45:31.604 } 00:45:31.604 Got JSON-RPC error response 00:45:31.604 response: 00:45:31.604 { 00:45:31.604 "code": -19, 00:45:31.604 "message": "No such device" 00:45:31.604 } 00:45:31.604 00:18:57 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:31.604 00:18:57 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:31.604 00:18:57 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:31.604 00:18:57 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:31.604 00:18:57 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:45:31.604 00:18:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:31.863 00:18:57 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:31.863 00:18:57 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:45:31.863 00:18:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:31.863 00:18:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:31.863 00:18:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:31.863 00:18:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:31.863 00:18:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HMXYK8CF35 00:45:31.863 00:18:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:31.863 00:18:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:31.863 00:18:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:31.863 00:18:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:31.863 00:18:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:31.863 00:18:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:31.863 00:18:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:31.863 00:18:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HMXYK8CF35 00:45:31.863 00:18:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HMXYK8CF35 00:45:31.863 00:18:57 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.HMXYK8CF35 00:45:31.863 00:18:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HMXYK8CF35 00:45:31.863 00:18:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HMXYK8CF35 00:45:32.121 00:18:58 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:32.121 00:18:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:32.381 nvme0n1 00:45:32.381 00:18:58 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:45:32.381 00:18:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:32.381 00:18:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:32.381 00:18:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:32.381 00:18:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.382 00:18:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:32.642 00:18:58 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:45:32.642 00:18:58 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:45:32.642 00:18:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:33.208 00:18:59 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:45:33.208 00:18:59 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:45:33.208 00:18:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:33.208 00:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:45:33.208 00:18:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:33.208 00:18:59 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:45:33.208 00:18:59 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:45:33.208 00:18:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:33.208 00:18:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:33.208 00:18:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:33.208 00:18:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:33.208 00:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:33.466 00:18:59 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:45:33.466 00:18:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:33.466 00:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:34.030 00:18:59 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:45:34.031 00:18:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:34.031 00:18:59 keyring_file -- keyring/file.sh@105 -- # jq length 00:45:34.031 00:19:00 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:45:34.031 00:19:00 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HMXYK8CF35 00:45:34.031 00:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HMXYK8CF35 00:45:34.595 00:19:00 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lc3yRihOfV 00:45:34.595 00:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lc3yRihOfV 00:45:34.595 00:19:00 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:34.595 00:19:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:35.161 nvme0n1 00:45:35.161 00:19:01 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:45:35.161 00:19:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:45:35.420 00:19:01 keyring_file -- keyring/file.sh@113 -- # config='{ 00:45:35.420 "subsystems": [ 00:45:35.420 { 00:45:35.420 "subsystem": "keyring", 00:45:35.420 "config": [ 00:45:35.420 { 00:45:35.420 "method": "keyring_file_add_key", 00:45:35.420 "params": { 00:45:35.420 "name": "key0", 00:45:35.420 "path": "/tmp/tmp.HMXYK8CF35" 00:45:35.420 } 00:45:35.420 }, 00:45:35.420 { 00:45:35.420 "method": "keyring_file_add_key", 00:45:35.420 "params": { 00:45:35.420 "name": "key1", 00:45:35.420 "path": "/tmp/tmp.lc3yRihOfV" 00:45:35.420 } 00:45:35.420 } 00:45:35.420 ] 
00:45:35.420 }, 00:45:35.420 { 00:45:35.420 "subsystem": "iobuf", 00:45:35.420 "config": [ 00:45:35.420 { 00:45:35.420 "method": "iobuf_set_options", 00:45:35.420 "params": { 00:45:35.420 "small_pool_count": 8192, 00:45:35.420 "large_pool_count": 1024, 00:45:35.420 "small_bufsize": 8192, 00:45:35.420 "large_bufsize": 135168, 00:45:35.420 "enable_numa": false 00:45:35.420 } 00:45:35.420 } 00:45:35.420 ] 00:45:35.420 }, 00:45:35.420 { 00:45:35.420 "subsystem": "sock", 00:45:35.420 "config": [ 00:45:35.420 { 00:45:35.420 "method": "sock_set_default_impl", 00:45:35.420 "params": { 00:45:35.420 "impl_name": "posix" 00:45:35.420 } 00:45:35.420 }, 00:45:35.420 { 00:45:35.420 "method": "sock_impl_set_options", 00:45:35.420 "params": { 00:45:35.420 "impl_name": "ssl", 00:45:35.420 "recv_buf_size": 4096, 00:45:35.420 "send_buf_size": 4096, 00:45:35.420 "enable_recv_pipe": true, 00:45:35.420 "enable_quickack": false, 00:45:35.420 "enable_placement_id": 0, 00:45:35.420 "enable_zerocopy_send_server": true, 00:45:35.420 "enable_zerocopy_send_client": false, 00:45:35.420 "zerocopy_threshold": 0, 00:45:35.420 "tls_version": 0, 00:45:35.420 "enable_ktls": false 00:45:35.420 } 00:45:35.420 }, 00:45:35.420 { 00:45:35.420 "method": "sock_impl_set_options", 00:45:35.420 "params": { 00:45:35.420 "impl_name": "posix", 00:45:35.420 "recv_buf_size": 2097152, 00:45:35.420 "send_buf_size": 2097152, 00:45:35.420 "enable_recv_pipe": true, 00:45:35.420 "enable_quickack": false, 00:45:35.420 "enable_placement_id": 0, 00:45:35.420 "enable_zerocopy_send_server": true, 00:45:35.420 "enable_zerocopy_send_client": false, 00:45:35.420 "zerocopy_threshold": 0, 00:45:35.420 "tls_version": 0, 00:45:35.420 "enable_ktls": false 00:45:35.420 } 00:45:35.420 } 00:45:35.420 ] 00:45:35.420 }, 00:45:35.420 { 00:45:35.420 "subsystem": "vmd", 00:45:35.420 "config": [] 00:45:35.420 }, 00:45:35.420 { 00:45:35.420 "subsystem": "accel", 00:45:35.420 "config": [ 00:45:35.420 { 00:45:35.420 "method": "accel_set_options", 00:45:35.420 "params": { 00:45:35.420 "small_cache_size": 128, 00:45:35.420 "large_cache_size": 16, 00:45:35.420 "task_count": 2048, 00:45:35.420 "sequence_count": 2048, 00:45:35.420 "buf_count": 2048 00:45:35.420 } 00:45:35.420 } 00:45:35.420 ] 00:45:35.420 }, 00:45:35.420 { 00:45:35.420 "subsystem": "bdev", 00:45:35.420 "config": [ 00:45:35.420 { 00:45:35.420 "method": "bdev_set_options", 00:45:35.420 "params": { 00:45:35.420 "bdev_io_pool_size": 65535, 00:45:35.420 "bdev_io_cache_size": 256, 00:45:35.420 "bdev_auto_examine": true, 00:45:35.420 "iobuf_small_cache_size": 128, 00:45:35.420 "iobuf_large_cache_size": 16 00:45:35.420 } 00:45:35.420 }, 00:45:35.420 { 00:45:35.420 "method": "bdev_raid_set_options", 00:45:35.420 "params": { 00:45:35.420 "process_window_size_kb": 1024, 00:45:35.420 "process_max_bandwidth_mb_sec": 0 00:45:35.420 } 00:45:35.420 }, 00:45:35.420 { 00:45:35.420 "method": "bdev_iscsi_set_options", 00:45:35.420 "params": { 00:45:35.420 "timeout_sec": 30 00:45:35.420 } 00:45:35.420 }, 00:45:35.420 { 00:45:35.420 "method": "bdev_nvme_set_options", 00:45:35.420 "params": { 00:45:35.420 "action_on_timeout": "none", 00:45:35.420 "timeout_us": 0, 00:45:35.420 "timeout_admin_us": 0, 00:45:35.420 "keep_alive_timeout_ms": 10000, 00:45:35.420 "arbitration_burst": 0, 00:45:35.420 "low_priority_weight": 0, 00:45:35.420 "medium_priority_weight": 0, 00:45:35.420 "high_priority_weight": 0, 00:45:35.420 "nvme_adminq_poll_period_us": 10000, 00:45:35.421 "nvme_ioq_poll_period_us": 0, 00:45:35.421 "io_queue_requests": 512, 
00:45:35.421 "delay_cmd_submit": true, 00:45:35.421 "transport_retry_count": 4, 00:45:35.421 "bdev_retry_count": 3, 00:45:35.421 "transport_ack_timeout": 0, 00:45:35.421 "ctrlr_loss_timeout_sec": 0, 00:45:35.421 "reconnect_delay_sec": 0, 00:45:35.421 "fast_io_fail_timeout_sec": 0, 00:45:35.421 "disable_auto_failback": false, 00:45:35.421 "generate_uuids": false, 00:45:35.421 "transport_tos": 0, 00:45:35.421 "nvme_error_stat": false, 00:45:35.421 "rdma_srq_size": 0, 00:45:35.421 "io_path_stat": false, 00:45:35.421 "allow_accel_sequence": false, 00:45:35.421 "rdma_max_cq_size": 0, 00:45:35.421 "rdma_cm_event_timeout_ms": 0, 00:45:35.421 "dhchap_digests": [ 00:45:35.421 "sha256", 00:45:35.421 "sha384", 00:45:35.421 "sha512" 00:45:35.421 ], 00:45:35.421 "dhchap_dhgroups": [ 00:45:35.421 "null", 00:45:35.421 "ffdhe2048", 00:45:35.421 "ffdhe3072", 00:45:35.421 "ffdhe4096", 00:45:35.421 "ffdhe6144", 00:45:35.421 "ffdhe8192" 00:45:35.421 ] 00:45:35.421 } 00:45:35.421 }, 00:45:35.421 { 00:45:35.421 "method": "bdev_nvme_attach_controller", 00:45:35.421 "params": { 00:45:35.421 "name": "nvme0", 00:45:35.421 "trtype": "TCP", 00:45:35.421 "adrfam": "IPv4", 00:45:35.421 "traddr": "127.0.0.1", 00:45:35.421 "trsvcid": "4420", 00:45:35.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:35.421 "prchk_reftag": false, 00:45:35.421 "prchk_guard": false, 00:45:35.421 "ctrlr_loss_timeout_sec": 0, 00:45:35.421 "reconnect_delay_sec": 0, 00:45:35.421 "fast_io_fail_timeout_sec": 0, 00:45:35.421 "psk": "key0", 00:45:35.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:35.421 "hdgst": false, 00:45:35.421 "ddgst": false, 00:45:35.421 "multipath": "multipath" 00:45:35.421 } 00:45:35.421 }, 00:45:35.421 { 00:45:35.421 "method": "bdev_nvme_set_hotplug", 00:45:35.421 "params": { 00:45:35.421 "period_us": 100000, 00:45:35.421 "enable": false 00:45:35.421 } 00:45:35.421 }, 00:45:35.421 { 00:45:35.421 "method": "bdev_wait_for_examine" 00:45:35.421 } 00:45:35.421 ] 00:45:35.421 }, 00:45:35.421 { 00:45:35.421 "subsystem": "nbd", 00:45:35.421 "config": [] 00:45:35.421 } 00:45:35.421 ] 00:45:35.421 }' 00:45:35.421 00:19:01 keyring_file -- keyring/file.sh@115 -- # killprocess 3727099 00:45:35.421 00:19:01 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3727099 ']' 00:45:35.421 00:19:01 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3727099 00:45:35.421 00:19:01 keyring_file -- common/autotest_common.sh@957 -- # uname 00:45:35.421 00:19:01 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:35.421 00:19:01 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3727099 00:45:35.421 00:19:01 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:45:35.421 00:19:01 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:45:35.421 00:19:01 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3727099' 00:45:35.421 killing process with pid 3727099 00:45:35.421 00:19:01 keyring_file -- common/autotest_common.sh@971 -- # kill 3727099 00:45:35.421 Received shutdown signal, test time was about 1.000000 seconds 00:45:35.421 00:45:35.421 Latency(us) 00:45:35.421 [2024-11-09T23:19:01.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:35.421 [2024-11-09T23:19:01.622Z] =================================================================================================================== 00:45:35.421 [2024-11-09T23:19:01.622Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:45:35.421 00:19:01 keyring_file -- common/autotest_common.sh@976 -- # wait 3727099 00:45:36.355 00:19:02 keyring_file -- keyring/file.sh@118 -- # bperfpid=3728715 00:45:36.355 00:19:02 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3728715 /var/tmp/bperf.sock 00:45:36.355 00:19:02 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3728715 ']' 00:45:36.355 00:19:02 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:36.355 00:19:02 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:45:36.355 00:19:02 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:36.355 00:19:02 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:36.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:36.355 00:19:02 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:36.355 00:19:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:36.355 00:19:02 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:45:36.355 "subsystems": [ 00:45:36.355 { 00:45:36.355 "subsystem": "keyring", 00:45:36.355 "config": [ 00:45:36.355 { 00:45:36.355 "method": "keyring_file_add_key", 00:45:36.355 "params": { 00:45:36.355 "name": "key0", 00:45:36.355 "path": "/tmp/tmp.HMXYK8CF35" 00:45:36.355 } 00:45:36.355 }, 00:45:36.355 { 00:45:36.355 "method": "keyring_file_add_key", 00:45:36.355 "params": { 00:45:36.355 "name": "key1", 00:45:36.355 "path": "/tmp/tmp.lc3yRihOfV" 00:45:36.355 } 00:45:36.355 } 00:45:36.355 ] 00:45:36.355 }, 00:45:36.355 { 00:45:36.355 "subsystem": "iobuf", 00:45:36.355 "config": [ 00:45:36.355 { 00:45:36.355 "method": "iobuf_set_options", 00:45:36.355 "params": { 00:45:36.355 "small_pool_count": 8192, 00:45:36.355 "large_pool_count": 1024, 00:45:36.355 "small_bufsize": 8192, 00:45:36.355 "large_bufsize": 135168, 00:45:36.355 "enable_numa": false 00:45:36.355 } 00:45:36.355 } 00:45:36.355 ] 00:45:36.355 }, 00:45:36.355 { 00:45:36.355 "subsystem": "sock", 00:45:36.355 "config": [ 00:45:36.355 { 00:45:36.355 "method": "sock_set_default_impl", 00:45:36.355 "params": { 00:45:36.355 "impl_name": "posix" 00:45:36.355 } 00:45:36.355 }, 00:45:36.355 { 00:45:36.355 "method": "sock_impl_set_options", 00:45:36.355 "params": { 00:45:36.355 "impl_name": "ssl", 00:45:36.355 "recv_buf_size": 4096, 00:45:36.355 "send_buf_size": 4096, 00:45:36.355 "enable_recv_pipe": true, 00:45:36.355 "enable_quickack": false, 00:45:36.355 "enable_placement_id": 0, 00:45:36.355 "enable_zerocopy_send_server": true, 00:45:36.355 "enable_zerocopy_send_client": false, 00:45:36.355 "zerocopy_threshold": 0, 00:45:36.355 "tls_version": 0, 00:45:36.355 "enable_ktls": false 00:45:36.355 } 00:45:36.355 }, 00:45:36.355 { 00:45:36.355 "method": "sock_impl_set_options", 00:45:36.356 "params": { 00:45:36.356 "impl_name": "posix", 00:45:36.356 "recv_buf_size": 2097152, 00:45:36.356 "send_buf_size": 2097152, 00:45:36.356 "enable_recv_pipe": true, 00:45:36.356 "enable_quickack": false, 00:45:36.356 "enable_placement_id": 0, 00:45:36.356 "enable_zerocopy_send_server": true, 00:45:36.356 "enable_zerocopy_send_client": false, 00:45:36.356 "zerocopy_threshold": 0, 00:45:36.356 "tls_version": 0, 00:45:36.356 "enable_ktls": false 00:45:36.356 } 00:45:36.356 } 00:45:36.356 ] 
00:45:36.356 }, 00:45:36.356 { 00:45:36.356 "subsystem": "vmd", 00:45:36.356 "config": [] 00:45:36.356 }, 00:45:36.356 { 00:45:36.356 "subsystem": "accel", 00:45:36.356 "config": [ 00:45:36.356 { 00:45:36.356 "method": "accel_set_options", 00:45:36.356 "params": { 00:45:36.356 "small_cache_size": 128, 00:45:36.356 "large_cache_size": 16, 00:45:36.356 "task_count": 2048, 00:45:36.356 "sequence_count": 2048, 00:45:36.356 "buf_count": 2048 00:45:36.356 } 00:45:36.356 } 00:45:36.356 ] 00:45:36.356 }, 00:45:36.356 { 00:45:36.356 "subsystem": "bdev", 00:45:36.356 "config": [ 00:45:36.356 { 00:45:36.356 "method": "bdev_set_options", 00:45:36.356 "params": { 00:45:36.356 "bdev_io_pool_size": 65535, 00:45:36.356 "bdev_io_cache_size": 256, 00:45:36.356 "bdev_auto_examine": true, 00:45:36.356 "iobuf_small_cache_size": 128, 00:45:36.356 "iobuf_large_cache_size": 16 00:45:36.356 } 00:45:36.356 }, 00:45:36.356 { 00:45:36.356 "method": "bdev_raid_set_options", 00:45:36.356 "params": { 00:45:36.356 "process_window_size_kb": 1024, 00:45:36.356 "process_max_bandwidth_mb_sec": 0 00:45:36.356 } 00:45:36.356 }, 00:45:36.356 { 00:45:36.356 "method": "bdev_iscsi_set_options", 00:45:36.356 "params": { 00:45:36.356 "timeout_sec": 30 00:45:36.356 } 00:45:36.356 }, 00:45:36.356 { 00:45:36.356 "method": "bdev_nvme_set_options", 00:45:36.356 "params": { 00:45:36.356 "action_on_timeout": "none", 00:45:36.356 "timeout_us": 0, 00:45:36.356 "timeout_admin_us": 0, 00:45:36.356 "keep_alive_timeout_ms": 10000, 00:45:36.356 "arbitration_burst": 0, 00:45:36.356 "low_priority_weight": 0, 00:45:36.356 "medium_priority_weight": 0, 00:45:36.356 "high_priority_weight": 0, 00:45:36.356 "nvme_adminq_poll_period_us": 10000, 00:45:36.356 "nvme_ioq_poll_period_us": 0, 00:45:36.356 "io_queue_requests": 512, 00:45:36.356 "delay_cmd_submit": true, 00:45:36.356 "transport_retry_count": 4, 00:45:36.356 "bdev_retry_count": 3, 00:45:36.356 "transport_ack_timeout": 0, 00:45:36.356 "ctrlr_loss_timeout_sec": 0, 00:45:36.356 "reconnect_delay_sec": 0, 00:45:36.356 "fast_io_fail_timeout_sec": 0, 00:45:36.356 "disable_auto_failback": false, 00:45:36.356 "generate_uuids": false, 00:45:36.356 "transport_tos": 0, 00:45:36.356 "nvme_error_stat": false, 00:45:36.356 "rdma_srq_size": 0, 00:45:36.356 "io_path_stat": false, 00:45:36.356 "allow_accel_sequence": false, 00:45:36.356 "rdma_max_cq_size": 0, 00:45:36.356 "rdma_cm_event_timeout_ms": 0, 00:45:36.356 "dhchap_digests": [ 00:45:36.356 "sha256", 00:45:36.356 "sha384", 00:45:36.356 "sha512" 00:45:36.356 ], 00:45:36.356 "dhchap_dhgroups": [ 00:45:36.356 "null", 00:45:36.356 "ffdhe2048", 00:45:36.356 "ffdhe3072", 00:45:36.356 "ffdhe4096", 00:45:36.356 "ffdhe6144", 00:45:36.356 "ffdhe8192" 00:45:36.356 ] 00:45:36.356 } 00:45:36.356 }, 00:45:36.356 { 00:45:36.356 "method": "bdev_nvme_attach_controller", 00:45:36.356 "params": { 00:45:36.356 "name": "nvme0", 00:45:36.356 "trtype": "TCP", 00:45:36.356 "adrfam": "IPv4", 00:45:36.356 "traddr": "127.0.0.1", 00:45:36.356 "trsvcid": "4420", 00:45:36.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:36.356 "prchk_reftag": false, 00:45:36.356 "prchk_guard": false, 00:45:36.356 "ctrlr_loss_timeout_sec": 0, 00:45:36.356 "reconnect_delay_sec": 0, 00:45:36.356 "fast_io_fail_timeout_sec": 0, 00:45:36.356 "psk": "key0", 00:45:36.356 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:36.356 "hdgst": false, 00:45:36.356 "ddgst": false, 00:45:36.356 "multipath": "multipath" 00:45:36.356 } 00:45:36.356 }, 00:45:36.356 { 00:45:36.356 "method": "bdev_nvme_set_hotplug", 00:45:36.356 
"params": { 00:45:36.356 "period_us": 100000, 00:45:36.356 "enable": false 00:45:36.356 } 00:45:36.356 }, 00:45:36.356 { 00:45:36.356 "method": "bdev_wait_for_examine" 00:45:36.356 } 00:45:36.356 ] 00:45:36.356 }, 00:45:36.356 { 00:45:36.356 "subsystem": "nbd", 00:45:36.356 "config": [] 00:45:36.356 } 00:45:36.356 ] 00:45:36.356 }' 00:45:36.356 [2024-11-10 00:19:02.431475] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 00:45:36.356 [2024-11-10 00:19:02.431722] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728715 ] 00:45:36.614 [2024-11-10 00:19:02.575540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:36.614 [2024-11-10 00:19:02.705731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:37.181 [2024-11-10 00:19:03.161606] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:37.439 00:19:03 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:37.439 00:19:03 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:45:37.439 00:19:03 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:45:37.439 00:19:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:37.439 00:19:03 keyring_file -- keyring/file.sh@121 -- # jq length 00:45:37.697 00:19:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:45:37.697 00:19:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:45:37.697 00:19:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:37.697 00:19:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:37.697 00:19:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:37.697 00:19:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:37.697 00:19:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:37.955 00:19:03 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:45:37.955 00:19:03 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:45:37.955 00:19:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:37.955 00:19:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:37.956 00:19:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:37.956 00:19:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:37.956 00:19:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:38.214 00:19:04 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:45:38.214 00:19:04 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:45:38.214 00:19:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:45:38.214 00:19:04 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:45:38.472 00:19:04 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:45:38.472 00:19:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:45:38.473 00:19:04 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.HMXYK8CF35 /tmp/tmp.lc3yRihOfV 00:45:38.473 00:19:04 keyring_file -- keyring/file.sh@20 -- # killprocess 3728715 00:45:38.473 00:19:04 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3728715 ']' 00:45:38.473 00:19:04 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3728715 00:45:38.473 00:19:04 keyring_file -- common/autotest_common.sh@957 -- # uname 00:45:38.473 00:19:04 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:38.473 00:19:04 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3728715 00:45:38.473 00:19:04 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:45:38.473 00:19:04 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:45:38.473 00:19:04 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3728715' 00:45:38.473 killing process with pid 3728715 00:45:38.473 00:19:04 keyring_file -- common/autotest_common.sh@971 -- # kill 3728715 00:45:38.473 Received shutdown signal, test time was about 1.000000 seconds 00:45:38.473 00:45:38.473 Latency(us) 00:45:38.473 [2024-11-09T23:19:04.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:38.473 [2024-11-09T23:19:04.674Z] =================================================================================================================== 00:45:38.473 [2024-11-09T23:19:04.674Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:38.473 00:19:04 keyring_file -- common/autotest_common.sh@976 -- # wait 3728715 00:45:39.420 00:19:05 keyring_file -- keyring/file.sh@21 -- # killprocess 3726959 00:45:39.420 00:19:05 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3726959 ']' 00:45:39.420 00:19:05 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3726959 00:45:39.420 00:19:05 keyring_file -- common/autotest_common.sh@957 -- # uname 00:45:39.420 00:19:05 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:39.420 00:19:05 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3726959 00:45:39.420 00:19:05 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:45:39.420 00:19:05 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:45:39.420 00:19:05 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3726959' 00:45:39.420 killing process with pid 3726959 00:45:39.420 00:19:05 keyring_file -- common/autotest_common.sh@971 -- # kill 3726959 00:45:39.420 00:19:05 keyring_file -- common/autotest_common.sh@976 -- # wait 3726959 00:45:41.949 00:45:41.949 real 0m20.289s 00:45:41.949 user 0m45.892s 00:45:41.949 sys 0m3.784s 00:45:41.949 00:19:07 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:41.949 00:19:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:41.949 ************************************ 00:45:41.949 END TEST keyring_file 00:45:41.949 ************************************ 00:45:41.949 00:19:07 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:45:41.949 00:19:07 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:41.949 00:19:07 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:45:41.949 00:19:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 
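The keyring_linux suite that starts below (launched by the run_test call just above) exercises the same attach/refcount flow, but the PSKs live in the kernel session keyring instead of plain files: linux.sh loads them with keyctl and the bdev_nvme RPCs then reference them as ":spdk-test:key0" / ":spdk-test:key1". Loading and inspecting the first key by hand, with the exact value and serial number that appear later in this log, would look like:

# Values taken from this run (linux.sh@66 and linux.sh@16); 'keyctl search'
# returns the key's serial number (139551249 here), which the test compares
# against the .sn field reported by keyring_get_keys.
keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s
keyctl search @s user :spdk-test:key0
keyctl print "$(keyctl search @s user :spdk-test:key0)"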
00:45:41.949 00:19:07 -- common/autotest_common.sh@10 -- # set +x 00:45:41.949 ************************************ 00:45:41.949 START TEST keyring_linux 00:45:41.949 ************************************ 00:45:41.949 00:19:07 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:41.949 Joined session keyring: 556704933 00:45:41.949 * Looking for test storage... 00:45:41.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:41.949 00:19:08 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:41.949 00:19:08 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:45:41.949 00:19:08 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:41.949 00:19:08 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@345 -- # : 1 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:41.949 00:19:08 keyring_linux -- scripts/common.sh@368 -- # return 0 00:45:41.949 00:19:08 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:41.949 00:19:08 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:41.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:41.949 --rc genhtml_branch_coverage=1 00:45:41.949 --rc genhtml_function_coverage=1 00:45:41.949 --rc genhtml_legend=1 00:45:41.949 --rc geninfo_all_blocks=1 00:45:41.949 --rc geninfo_unexecuted_blocks=1 00:45:41.949 00:45:41.949 ' 00:45:41.949 00:19:08 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:41.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:41.949 --rc genhtml_branch_coverage=1 00:45:41.949 --rc genhtml_function_coverage=1 00:45:41.949 --rc genhtml_legend=1 00:45:41.949 --rc geninfo_all_blocks=1 00:45:41.949 --rc geninfo_unexecuted_blocks=1 00:45:41.949 00:45:41.950 ' 00:45:41.950 00:19:08 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:41.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:41.950 --rc genhtml_branch_coverage=1 00:45:41.950 --rc genhtml_function_coverage=1 00:45:41.950 --rc genhtml_legend=1 00:45:41.950 --rc geninfo_all_blocks=1 00:45:41.950 --rc geninfo_unexecuted_blocks=1 00:45:41.950 00:45:41.950 ' 00:45:41.950 00:19:08 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:41.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:41.950 --rc genhtml_branch_coverage=1 00:45:41.950 --rc genhtml_function_coverage=1 00:45:41.950 --rc genhtml_legend=1 00:45:41.950 --rc geninfo_all_blocks=1 00:45:41.950 --rc geninfo_unexecuted_blocks=1 00:45:41.950 00:45:41.950 ' 00:45:41.950 00:19:08 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:41.950 00:19:08 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:41.950 00:19:08 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:45:41.950 00:19:08 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:41.950 00:19:08 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:41.950 00:19:08 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:41.950 00:19:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:41.950 00:19:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:41.950 00:19:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:41.950 00:19:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:45:41.950 00:19:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
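A few lines further down, prep_key turns the raw hex strings 00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00 into interchange-format PSKs via format_interchange_psk, which shells out to an inline python snippet (nvmf/common.sh@733). A hedged stand-alone sketch of that formatting for digest argument 0 (the "00" field), assuming the CRC32 of the ASCII key bytes is appended little-endian before base64 encoding as the TLS PSK interchange format describes, is:

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
# Sketch of format_key for the no-hash case: base64 over the ASCII key with its
# CRC32 appended, wrapped as NVMeTLSkey-1:00:<base64>:; this should reproduce the
# string that the chmod 0600 key files and the keyctl calls below carry.
import base64, struct, sys, zlib
raw = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(raw) & 0xffffffff)
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(raw + crc).decode())
PY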
00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:41.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:41.950 00:19:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:41.950 00:19:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:41.950 00:19:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:41.950 00:19:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:45:41.950 00:19:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:45:41.950 00:19:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:45:41.950 00:19:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:45:41.950 00:19:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:41.950 00:19:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:45:41.950 00:19:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:41.950 00:19:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:41.950 00:19:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:45:41.950 00:19:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:41.950 00:19:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:42.209 00:19:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:45:42.209 00:19:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:45:42.209 /tmp/:spdk-test:key0 00:45:42.209 00:19:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:45:42.209 00:19:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:42.209 00:19:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:45:42.209 00:19:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:42.209 00:19:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:42.209 00:19:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:45:42.209 
00:19:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:42.209 00:19:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:42.209 00:19:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:42.209 00:19:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:42.209 00:19:08 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:42.209 00:19:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:42.209 00:19:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:42.209 00:19:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:45:42.209 00:19:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:45:42.209 /tmp/:spdk-test:key1 00:45:42.209 00:19:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3729517 00:45:42.209 00:19:08 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:42.209 00:19:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3729517 00:45:42.209 00:19:08 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3729517 ']' 00:45:42.209 00:19:08 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:42.209 00:19:08 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:42.209 00:19:08 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:42.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:42.209 00:19:08 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:42.209 00:19:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:42.209 [2024-11-10 00:19:08.308989] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:45:42.209 [2024-11-10 00:19:08.309135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729517 ] 00:45:42.468 [2024-11-10 00:19:08.443702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:42.468 [2024-11-10 00:19:08.573268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:43.414 00:19:09 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:43.414 00:19:09 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:45:43.414 00:19:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:45:43.414 00:19:09 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:43.414 00:19:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:43.414 [2024-11-10 00:19:09.480795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:43.414 null0 00:45:43.414 [2024-11-10 00:19:09.512785] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:43.414 [2024-11-10 00:19:09.513362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:43.414 00:19:09 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:43.414 00:19:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:45:43.414 139551249 00:45:43.414 00:19:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:45:43.414 962209120 00:45:43.414 00:19:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3729742 00:45:43.414 00:19:09 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:45:43.414 00:19:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3729742 /var/tmp/bperf.sock 00:45:43.414 00:19:09 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3729742 ']' 00:45:43.414 00:19:09 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:43.414 00:19:09 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:43.414 00:19:09 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:43.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:43.414 00:19:09 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:43.414 00:19:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:43.414 [2024-11-10 00:19:09.613276] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.03.0 initialization... 
00:45:43.414 [2024-11-10 00:19:09.613425] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729742 ] 00:45:43.671 [2024-11-10 00:19:09.745985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:43.928 [2024-11-10 00:19:09.874437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:44.493 00:19:10 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:44.493 00:19:10 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:45:44.493 00:19:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:45:44.493 00:19:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:45:44.752 00:19:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:45:44.752 00:19:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:45:45.318 00:19:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:45.318 00:19:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:45.582 [2024-11-10 00:19:11.728722] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:45.842 nvme0n1 00:45:45.842 00:19:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:45:45.842 00:19:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:45:45.842 00:19:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:45.842 00:19:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:45.842 00:19:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:45.842 00:19:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:46.100 00:19:12 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:45:46.100 00:19:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:46.100 00:19:12 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:45:46.100 00:19:12 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:45:46.100 00:19:12 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:46.100 00:19:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:46.100 00:19:12 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:45:46.359 00:19:12 keyring_linux -- keyring/linux.sh@25 -- # sn=139551249 00:45:46.359 00:19:12 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:45:46.359 00:19:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:46.359 00:19:12 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 139551249 == \1\3\9\5\5\1\2\4\9 ]] 00:45:46.359 00:19:12 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 139551249 00:45:46.359 00:19:12 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:45:46.359 00:19:12 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:46.359 Running I/O for 1 seconds... 00:45:47.550 6247.00 IOPS, 24.40 MiB/s 00:45:47.550 Latency(us) 00:45:47.550 [2024-11-09T23:19:13.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:47.550 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:45:47.550 nvme0n1 : 1.02 6242.75 24.39 0.00 0.00 20282.53 11747.93 30292.20 00:45:47.550 [2024-11-09T23:19:13.751Z] =================================================================================================================== 00:45:47.550 [2024-11-09T23:19:13.751Z] Total : 6242.75 24.39 0.00 0.00 20282.53 11747.93 30292.20 00:45:47.550 { 00:45:47.550 "results": [ 00:45:47.550 { 00:45:47.550 "job": "nvme0n1", 00:45:47.550 "core_mask": "0x2", 00:45:47.550 "workload": "randread", 00:45:47.550 "status": "finished", 00:45:47.550 "queue_depth": 128, 00:45:47.550 "io_size": 4096, 00:45:47.550 "runtime": 1.021184, 00:45:47.550 "iops": 6242.753509651542, 00:45:47.550 "mibps": 24.385755897076336, 00:45:47.550 "io_failed": 0, 00:45:47.550 "io_timeout": 0, 00:45:47.550 "avg_latency_us": 20282.528657196806, 00:45:47.550 "min_latency_us": 11747.934814814815, 00:45:47.550 "max_latency_us": 30292.195555555554 00:45:47.550 } 00:45:47.550 ], 00:45:47.550 "core_count": 1 00:45:47.550 } 00:45:47.550 00:19:13 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:47.550 00:19:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:47.808 00:19:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:45:47.808 00:19:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:45:47.809 00:19:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:47.809 00:19:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:47.809 00:19:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:47.809 00:19:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:48.066 00:19:14 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:45:48.066 00:19:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:48.066 00:19:14 keyring_linux -- keyring/linux.sh@23 -- # return 00:45:48.066 00:19:14 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:48.066 00:19:14 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:45:48.066 00:19:14 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:45:48.066 00:19:14 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:48.066 00:19:14 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:48.066 00:19:14 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:48.066 00:19:14 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:48.066 00:19:14 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:48.066 00:19:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:48.324 [2024-11-10 00:19:14.345607] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:48.324 [2024-11-10 00:19:14.346230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:48.324 [2024-11-10 00:19:14.347203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:48.324 [2024-11-10 00:19:14.348195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:48.324 [2024-11-10 00:19:14.348230] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:48.324 [2024-11-10 00:19:14.348254] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:48.324 [2024-11-10 00:19:14.348287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:45:48.324 request: 00:45:48.324 { 00:45:48.324 "name": "nvme0", 00:45:48.324 "trtype": "tcp", 00:45:48.324 "traddr": "127.0.0.1", 00:45:48.324 "adrfam": "ipv4", 00:45:48.325 "trsvcid": "4420", 00:45:48.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:48.325 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:48.325 "prchk_reftag": false, 00:45:48.325 "prchk_guard": false, 00:45:48.325 "hdgst": false, 00:45:48.325 "ddgst": false, 00:45:48.325 "psk": ":spdk-test:key1", 00:45:48.325 "allow_unrecognized_csi": false, 00:45:48.325 "method": "bdev_nvme_attach_controller", 00:45:48.325 "req_id": 1 00:45:48.325 } 00:45:48.325 Got JSON-RPC error response 00:45:48.325 response: 00:45:48.325 { 00:45:48.325 "code": -5, 00:45:48.325 "message": "Input/output error" 00:45:48.325 } 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@33 -- # sn=139551249 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 139551249 00:45:48.325 1 links removed 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@33 -- # sn=962209120 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 962209120 00:45:48.325 1 links removed 00:45:48.325 00:19:14 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3729742 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3729742 ']' 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3729742 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3729742 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3729742' 00:45:48.325 killing process with pid 3729742 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@971 -- # kill 3729742 00:45:48.325 Received shutdown signal, test time was about 1.000000 seconds 00:45:48.325 00:45:48.325 
Latency(us) 00:45:48.325 [2024-11-09T23:19:14.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:48.325 [2024-11-09T23:19:14.526Z] =================================================================================================================== 00:45:48.325 [2024-11-09T23:19:14.526Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:48.325 00:19:14 keyring_linux -- common/autotest_common.sh@976 -- # wait 3729742 00:45:49.257 00:19:15 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3729517 00:45:49.257 00:19:15 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3729517 ']' 00:45:49.257 00:19:15 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3729517 00:45:49.257 00:19:15 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:45:49.257 00:19:15 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:49.257 00:19:15 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3729517 00:45:49.257 00:19:15 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:45:49.257 00:19:15 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:45:49.257 00:19:15 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3729517' 00:45:49.257 killing process with pid 3729517 00:45:49.257 00:19:15 keyring_linux -- common/autotest_common.sh@971 -- # kill 3729517 00:45:49.257 00:19:15 keyring_linux -- common/autotest_common.sh@976 -- # wait 3729517 00:45:51.785 00:45:51.785 real 0m9.578s 00:45:51.785 user 0m16.719s 00:45:51.785 sys 0m1.916s 00:45:51.785 00:19:17 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:51.785 00:19:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:51.785 ************************************ 00:45:51.785 END TEST keyring_linux 00:45:51.785 ************************************ 00:45:51.785 00:19:17 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:45:51.785 00:19:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:45:51.785 00:19:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:45:51.785 00:19:17 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:45:51.785 00:19:17 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:45:51.785 00:19:17 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:45:51.785 00:19:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:45:51.785 00:19:17 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:45:51.785 00:19:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:45:51.785 00:19:17 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:45:51.785 00:19:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:51.786 00:19:17 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:45:51.786 00:19:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:51.786 00:19:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:51.786 00:19:17 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:45:51.786 00:19:17 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:45:51.786 00:19:17 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:45:51.786 00:19:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:51.786 00:19:17 -- common/autotest_common.sh@10 -- # set +x 00:45:51.786 00:19:17 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:45:51.786 00:19:17 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:45:51.786 00:19:17 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:45:51.786 00:19:17 -- common/autotest_common.sh@10 -- # set +x 00:45:53.167 INFO: APP EXITING 
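With the keyring_linux test finished, the sequence it exercised reduces to a few keyctl and rpc.py calls. A condensed sketch using the serial number and key value from the run above, with the long workspace path to rpc.py shortened; the session keyring (@s) placement matches what the test used:

# Register the interchange-format PSK in the session keyring; keyctl prints the key serial.
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s   # -> 139551249

# Point the NVMe bdev at the key by name; the PSK is referenced by name rather than passed by value.
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# Cross-check what the kernel keyring holds against what SPDK reports via keyring_get_keys.
keyctl search @s user :spdk-test:key0    # -> 139551249
keyctl print 139551249                   # -> NVMeTLSkey-1:00:MDAx...JEiQ:

# Cleanup unlinks the key by serial, as in the "1 links removed" lines above.
keyctl unlink 139551249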
00:45:53.167 INFO: killing all VMs 00:45:53.168 INFO: killing vhost app 00:45:53.168 INFO: EXIT DONE 00:45:54.584 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:45:54.585 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:45:54.585 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:45:54.585 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:45:54.585 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:45:54.585 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:45:54.585 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:45:54.585 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:45:54.585 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:45:54.585 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:45:54.585 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:45:54.585 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:45:54.585 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:45:54.585 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:45:54.585 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:45:54.585 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:45:54.585 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:45:55.997 Cleaning 00:45:55.997 Removing: /var/run/dpdk/spdk0/config 00:45:55.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:55.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:55.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:55.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:55.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:55.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:55.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:55.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:55.997 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:55.997 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:55.997 Removing: /var/run/dpdk/spdk1/config 00:45:55.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:55.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:55.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:55.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:55.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:55.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:55.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:55.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:55.997 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:55.997 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:55.997 Removing: /var/run/dpdk/spdk2/config 00:45:55.997 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:55.997 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:55.997 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:55.997 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:55.997 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:55.997 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:55.997 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:55.997 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:55.997 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:55.997 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:55.997 Removing: /var/run/dpdk/spdk3/config 00:45:55.997 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:55.997 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:55.997 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:55.997 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:55.997 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:55.997 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:55.997 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:55.997 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:55.997 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:55.997 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:55.997 Removing: /var/run/dpdk/spdk4/config 00:45:55.997 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:55.997 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:55.997 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:55.997 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:55.997 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:55.997 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:55.997 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:55.997 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:55.997 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:55.997 Removing: /var/run/dpdk/spdk4/hugepage_info 00:45:55.997 Removing: /dev/shm/bdev_svc_trace.1 00:45:55.997 Removing: /dev/shm/nvmf_trace.0 00:45:55.997 Removing: /dev/shm/spdk_tgt_trace.pid3316699 00:45:55.997 Removing: /var/run/dpdk/spdk0 00:45:55.997 Removing: /var/run/dpdk/spdk1 00:45:55.997 Removing: /var/run/dpdk/spdk2 00:45:55.997 Removing: /var/run/dpdk/spdk3 00:45:55.997 Removing: /var/run/dpdk/spdk4 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3313809 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3314942 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3316699 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3317424 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3318374 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3318864 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3319775 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3320026 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3320568 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3321905 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3323088 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3323689 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3324292 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3325015 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3325488 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3326024 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3326434 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3326757 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3327207 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3329973 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3330528 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3330968 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3331192 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3332457 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3332599 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3333833 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3333975 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3334497 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3334668 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3334978 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3335118 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3336159 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3336438 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3336691 00:45:55.997 Removing: 
/var/run/dpdk/spdk_pid3339283 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3342187 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3349314 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3349723 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3352505 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3352726 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3355817 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3360311 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3362643 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3369871 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3375508 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3376966 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3377773 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3388825 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3391950 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3449992 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3453501 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3457644 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3463998 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3493314 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3496503 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3497683 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3499143 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3499416 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3499692 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3500089 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3501043 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3502998 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3504394 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3505030 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3506975 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3507681 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3508509 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3511290 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3514971 00:45:55.997 Removing: /var/run/dpdk/spdk_pid3514972 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3514973 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3517345 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3519804 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3523353 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3547473 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3550394 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3554415 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3555888 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3557636 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3559609 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3562757 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3565753 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3568526 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3573283 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3573290 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3576328 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3576590 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3576721 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3577002 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3577130 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3578209 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3579506 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3580684 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3581860 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3583043 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3584222 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3588284 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3588815 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3590745 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3591606 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3595605 00:45:55.998 Removing: 
/var/run/dpdk/spdk_pid3597688 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3601456 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3605169 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3612009 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3616745 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3616749 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3630397 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3631062 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3631726 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3632269 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3633298 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3633921 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3634581 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3635125 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3638022 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3638291 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3642339 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3642532 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3646051 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3648916 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3656444 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3656959 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3659603 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3659882 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3662769 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3666615 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3668886 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3676060 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3681644 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3682957 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3683754 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3695336 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3697858 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3699991 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3705425 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3705487 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3708580 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3710097 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3711620 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3712485 00:45:55.998 Removing: /var/run/dpdk/spdk_pid3714012 00:45:56.257 Removing: /var/run/dpdk/spdk_pid3715001 00:45:56.257 Removing: /var/run/dpdk/spdk_pid3721154 00:45:56.257 Removing: /var/run/dpdk/spdk_pid3721549 00:45:56.257 Removing: /var/run/dpdk/spdk_pid3721944 00:45:56.257 Removing: /var/run/dpdk/spdk_pid3723828 00:45:56.257 Removing: /var/run/dpdk/spdk_pid3724155 00:45:56.257 Removing: /var/run/dpdk/spdk_pid3724514 00:45:56.257 Removing: /var/run/dpdk/spdk_pid3726959 00:45:56.257 Removing: /var/run/dpdk/spdk_pid3727099 00:45:56.257 Removing: /var/run/dpdk/spdk_pid3728715 00:45:56.257 Removing: /var/run/dpdk/spdk_pid3729517 00:45:56.257 Removing: /var/run/dpdk/spdk_pid3729742 00:45:56.257 Clean 00:45:56.257 00:19:22 -- common/autotest_common.sh@1451 -- # return 0 00:45:56.257 00:19:22 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:45:56.257 00:19:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:56.257 00:19:22 -- common/autotest_common.sh@10 -- # set +x 00:45:56.257 00:19:22 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:45:56.257 00:19:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:56.257 00:19:22 -- common/autotest_common.sh@10 -- # set +x 00:45:56.257 00:19:22 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:56.257 00:19:22 -- spdk/autotest.sh@390 -- # [[ -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:56.257 00:19:22 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:56.257 00:19:22 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:45:56.257 00:19:22 -- spdk/autotest.sh@394 -- # hostname 00:45:56.257 00:19:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:56.515 geninfo: WARNING: invalid characters removed from testname! 00:46:28.602 00:19:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:29.540 00:19:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:32.844 00:19:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:35.383 00:20:01 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:38.672 00:20:04 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:41.201 00:20:07 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:43.730 00:20:09 -- 
spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:46:43.730 00:20:09 -- spdk/autorun.sh@1 -- $ timing_finish 00:46:43.730 00:20:09 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:46:43.730 00:20:09 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:46:43.730 00:20:09 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:46:43.730 00:20:09 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:46:43.988 + [[ -n 3242410 ]] 00:46:43.988 + sudo kill 3242410 00:46:43.996 [Pipeline] } 00:46:44.008 [Pipeline] // stage 00:46:44.012 [Pipeline] } 00:46:44.024 [Pipeline] // timeout 00:46:44.028 [Pipeline] } 00:46:44.040 [Pipeline] // catchError 00:46:44.044 [Pipeline] } 00:46:44.057 [Pipeline] // wrap 00:46:44.062 [Pipeline] } 00:46:44.073 [Pipeline] // catchError 00:46:44.080 [Pipeline] stage 00:46:44.082 [Pipeline] { (Epilogue) 00:46:44.093 [Pipeline] catchError 00:46:44.094 [Pipeline] { 00:46:44.105 [Pipeline] echo 00:46:44.107 Cleanup processes 00:46:44.112 [Pipeline] sh 00:46:44.392 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:44.392 3743250 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:44.405 [Pipeline] sh 00:46:44.686 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:44.686 ++ grep -v 'sudo pgrep' 00:46:44.686 ++ awk '{print $1}' 00:46:44.686 + sudo kill -9 00:46:44.686 + true 00:46:44.698 [Pipeline] sh 00:46:44.981 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:46:57.315 [Pipeline] sh 00:46:57.591 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:46:57.591 Artifacts sizes are good 00:46:57.606 [Pipeline] archiveArtifacts 00:46:57.614 Archiving artifacts 00:46:57.747 [Pipeline] sh 00:46:58.033 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:46:58.046 [Pipeline] cleanWs 00:46:58.054 [WS-CLEANUP] Deleting project workspace... 00:46:58.054 [WS-CLEANUP] Deferred wipeout is used... 00:46:58.060 [WS-CLEANUP] done 00:46:58.061 [Pipeline] } 00:46:58.075 [Pipeline] // catchError 00:46:58.084 [Pipeline] sh 00:46:58.360 + logger -p user.info -t JENKINS-CI 00:46:58.368 [Pipeline] } 00:46:58.380 [Pipeline] // stage 00:46:58.385 [Pipeline] } 00:46:58.399 [Pipeline] // node 00:46:58.404 [Pipeline] End of Pipeline 00:46:58.447 Finished: SUCCESS
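The lcov invocations near the end of the run follow a capture, merge, filter pattern before the timing/flamegraph step. A condensed sketch with the repeated --rc options and workspace paths abbreviated (SPDK_DIR and OUT stand in for the full /var/jenkins/workspace/... paths, and the separate per-pattern lcov -r calls above are folded into one):

OUT=$SPDK_DIR/../output
# Capture coverage for the test run, tagged with the host name (cf. the autotest.sh hostname step).
lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"
# Merge the pre-test baseline with the test-time data.
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# Drop sources that are out of scope for SPDK coverage (DPDK, system headers, sample apps).
lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
     '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o "$OUT/cov_total.info"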